#include <Generic_Sequence_T.h>
Default constructor.
Constructor with control of ownership.
Copy constructor.
Destructor.
Returns the length of the sequence.
Set a new length for the sequence.
Return the maximum length of the sequence.
Assignment operator.
Get a const element from the sequence.
Get an element from the sequence.
Returns the state of the sequence release flag.
Allows the buffer underlying a sequence to be replaced. The parameters to replace() are identical in type, order, and purpose to those for the <T *data> constructor for the sequence.
The buffer with all the elements.
The current number of elements in the buffer.
The maximum number of elements the buffer can contain.
If true then the sequence should release the buffer when it is destroyed.
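A minimal usage sketch (not from the original reference page): handing a caller-allocated buffer to a sequence via replace(). The sequence type MySeq is a hypothetical stand-in for an IDL-generated instantiation of this template, and the parameter order shown is the conventional CORBA one (maximum, length, data, release); verify it against your generated signature before relying on it.

#include <Generic_Sequence_T.h>

// MySeq stands in for a concrete sequence-of-CORBA::Long type built on this template.
void reuse_buffer(MySeq &seq)
{
    CORBA::ULong max = 8;
    // Allocate with the sequence's own allocator so the ownership transfer is safe.
    CORBA::Long *buf = MySeq::allocbuf(max);
    buf[0] = 42;
    // Hand the buffer over. With release = true, the sequence takes ownership
    // and frees the buffer in its destructor (the release flag described above).
    seq.replace(max, 1 /* length */, buf, true /* release */);
}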
Pymol.stored
The pymol.stored helper variable serves as a namespace for user defined globals which are accessible from iterate-like commands. The iterate commands by default expose the pymol module namespace as the globals dictionary, so pymol.stored is accessible as stored, and (user defined) members as stored.membername.
Example
Count number of atoms with a PyMOL script:
stored.count = 0
iterate all, stored.count += 1
print("number of atoms:", stored.count)
Count number of atoms with a Python script:
from pymol import cmd
from pymol import stored

stored.count = 0
cmd.iterate("all", "stored.count += 1")
print("number of atoms:", stored.count)
Problems
There is no guarantee that other scripts or plugins don't use the same stored member names. It is recommended that properly written plugins use the "space" argument with cmd.iterate and cmd.alter to define their own global namespace.
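A minimal sketch of the space-argument approach (not from the original page; the variable names are illustrative):

from pymol import cmd

my_space = {'count': 0}
cmd.iterate("all", "count += 1", space=my_space)
print("number of atoms:", my_space['count'])

This keeps the counter in a private dictionary rather than the shared pymol.stored namespace, so it cannot collide with other plugins.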
tcpoly_infer
implement what's described in "Partial polymorphic type inference and higher-order unification"
object Test {
def meh[M[_], A](x: M[A]): M[A] = x
meh{(x: Int) => x} // should solve ?M = [X] X => X and ?A = Int ...
}
You can find that paper here:. I hope this is useful to others who stumble upon this issue. Now if only I understood what the paper were talking about...
Further work on that problem is described in "Unification via explicit substitutions: The case of higher-order patterns", from Dowek, Hardin, Kirchner and Pfenning (Pfenning is the author of the above paper). I am not an expert, but this work is cited in Pierce's TAPL (Sec. 23.6, p. 355), which explains that unlike the algorithm above it is terminating and always finds a unique solution; the paper suggests that this work applies to an important subcase. However, I cannot fully judge myself the respective merits of the two papers.
With dependent method types we have a somewhat cumbersome, but just about usable workaround. The key idea is to use implicits to destructure a type, and then pick out and use its component parts as dependent types.
// Template we will unpack into
trait UnpackM[MA] {
  type M[_]
  type A
}

// Destructuring implicits
implicit def unpackM1[M0[_], A0] = new UnpackM[M0[A0]] {
  type M[X] = M0[X]
  type A = A0
}

implicit def unpackM2[M0[_, _], A0, B0] = new UnpackM[M0[A0, B0]] {
  type M[X] = M0[A0, X]
  type A = B0
}

def meh[MA](x : MA)(implicit u : UnpackM[MA]) : (u.M[String], List[u.A]) = null
//      ^^          ^^^^^^^^^^^^^^^^^^^^^^^^     ^^^^^^^^^^^       ^^^
//      (1)         (2)                          (3)               (3)
// 1: Type variable being destructured
// 2: Destructuring implicit
// 3: Uses of (dependent) component types
val m = meh{(x: Int) => x}
implicitly[m.type <:< (Int => String, List[Int])]
See also here for a slightly more elaborate scalaz inspired example.
Miles Sabin, your workaround is interesting:
But unfortunately the example you link still needs these two lines:
type EitherInt[A] = Either[Int, A]
implicit object EitherTC extends TC[EitherInt]
Moreover, your code does not even correctly solve the original problem - it only infers that Int => Int matches M0[A0, B0] with M0 = Function1, not that it matches M1[A1] with M1[X] = X => X.
See also SI-6744, where we would like to infer these sort of types in pattern matching.
Would it be any easier to just look for partial applications of existing type constructors in left-to-right order? Haskell does roughly this (actually, type constructors are just curried, see below), it is tractable, and people don't seem to have an issue with the limitation, though occasionally you do have to introduce a new type just to flip the order of some type parameters.
Here's an example:
case class State[S,A](run: S => (A,S))
object Blah {
def foo[M[_],A](m: M[A]): M[A] = m
...
val blah = foo(State((i: Int) => ("hi",i+1)))
}
Another way to think about this is that type constructors are curried, so State[S,A] is actually represented as Ap(Ap(State, S), A), and likewise the M[A] is Ap(freevar, A) - these unify with freevar = Ap(State, S).
To Paul's comment, I agree that looking at the type parameters as curried, partially applied from left to right (SI-4719) is probably usually good enough.
If State[S, A] has pseudo-Haskell-ish kind (*, *) -> *, then a curried State (used as State[S][A]) would have kind * -> * -> *, and the partial application State[S] would have kind * -> *.
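(Editorial sketch, not part of the original thread: a minimal illustration of what the left-to-right rule buys in practice, assuming the -Ypartial-unification scalac flag introduced by the fix linked further down.)

```
// With -Ypartial-unification (scalac 2.11.9+ / 2.12), the compiler tries
// left-to-right partial applications of type constructors, as in Haskell.
object PartialUnificationDemo {
  def foo[M[_], A](m: M[A]): M[A] = m

  val e: Either[Int, String] = Right("hi")
  foo(e) // solves M = [X] Either[Int, X] and A = String
}
```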
It seems somehow I never mentioned higher-order pattern unification by Miller, which is what Agda uses — the paper I linked ("Unification via explicit substitutions: The case of higher-order patterns") is indeed about that. For whatever reason, everybody steers away from the original Huet algorithm, including modern implementation of lambdaProlog (which is what "Partial Polymorphic Type Inference and Higher-Order Unification" uses).
Oh, jira.
SI-2712 closed!! Dewey defeats Truman!!
That was cruel and heartless ...
My heart rate went up for a moment...
long, wistful sigh
well, that was (briefly) exciting!
I keep on hitting this issue in various forms, most recently when trying to use type lambdas to fill in one of two types on a * -> * -> * kinded type. I have this type:
```
case class RangeAs[R, V](range: R, lower: V, upper: V)
object RangeAs {
trait Aux[V]
}
```
There are various implicit instances with RangeAs.Aux[X]#l as one of their type parameters. They are not found in all situations if I attempt to summon them using the lambda. However, if I do this:
```
type RangeAsX[T] = RangeAs.Aux[X]#l[T]
```
Now if I summon them using RangeAsX, they are found.
For the watchers of this JIRA that haven't seen this on Twitter, Miles posted a demo project exercising a proposed fix for this JIRA:
PR against 2.12.x here:.
Backport to 2.11.8 available here.
2.11.9 PR:
Find the least frequent character in a string in Python
This Python tutorial will teach you how to find the least frequent character in a string. In a Python program, we sometimes need to perform some operation on the least frequently occurring character in a given string, which is why it helps to know different ways to find it. Let's understand them further in this tutorial.
We will discuss the following ways to get the least frequently occurring character in a string. Go through the code carefully to grasp the concept.
Method 1
This is a simple and naive method. Here is the algorithm.
- Create an empty dictionary.
- Use a for loop to iterate through all the characters in the string.
- If the character already exists in the dictionary, increment its value by 1. Otherwise, initialize its value with 1. Note that the keys for the dictionary are the characters of the given string.
- The resulting dictionary holds the count of each character as a key-value pair. Find the key with the minimum value.
- Typecast it to string and print it.
Have a look at the code to get the logic of the program.
string = "aabbcddeeff" dict ={} for character in string: if character in dict: dict[character]+=1 else: dict[character]=1 print("The least frequent character is", str(min(dict, key = dict.get)))
Output:
The least frequent character is c
Method 2
Another method to solve this problem makes use of the collections library. The collections library has a Counter() which can be used to store the frequency of every character in the string. To find the least frequently occurring character, we again use the min() function as we did in the previous method. See the code for a better understanding.
import collections

string = "aabbcddeeff"
count = collections.Counter(string)
print("The least frequent character is", str(min(count, key=count.get)))
Output:
The least frequent character is c
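As a side note (not in the original tutorial), collections.Counter also provides most_common(), which sorts entries by descending frequency, so its last entry is the least frequent character:

import collections

string = "aabbcddeeff"
count = collections.Counter(string)
# most_common() sorts by descending count, so the last pair is the rarest
least_frequent, _ = count.most_common()[-1]
print("The least frequent character is", least_frequent)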
Thank you.
Also read: Find the most frequent value in a list in Python
This article explains how to get started with SDL2 in Linux. For other SDL2 tutorials (including setting up on Windows), check out my SDL2 Tutorials at Programmer’s Ranch.
Using apt-get
If you’re on a Debian-based Linux distribution, you can use the APT package manager to easily install the SDL2 development libraries. From a terminal, install the libsdl2-dev package:
sudo apt-get install libsdl2-dev
Installing from source
If for whatever reason you can’t use a package manager, you’ll have to compile SDL2 from the source code. To do this, you first have to download SDL2:
After extracting the archive to a folder, cd to that folder and run:
./configure
When it’s done, run:
make all
Finally, run:
sudo make install
Testing it out
To verify that you can compile an SDL2 program, use the following code (it’s the same used in my “SDL2: Setting up SDL2 in Visual Studio 2010” article at Programmer’s Ranch):
#include <SDL2/SDL.h>

int main(int argc, char ** argv)
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_Quit();
    return 0;
}
You can use vi or your favourite editor to create the source code.
To compile this (assuming it’s called sdl2minimal.c), use the following command:
gcc sdl2minimal.c -lSDL2 -lSDL2main -o sdl2minimal
We need to link in the SDL2 libraries, which is why we add the -lSDL2 -lSDL2main. Be aware that those start with a lowercase L, not a 1. The program should compile. It won't show you anything if you run it, but now you know that you're all set up to write SDL2 programs on Linux.
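As an aside (not from the original post), if the headers or libraries live in a non-default location, the sdl2-config helper installed alongside the development package can supply the right compiler and linker flags:

gcc sdl2minimal.c $(sdl2-config --cflags --libs) -o sdl2minimal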
2 thoughts on “How to set up SDL2 on Linux”
I found this very helpful, thanks for the post. I thought setting up libsdl would be much more difficult but I got through this with no problems in under 7 minutes.
PUTC.C (from ASYNC13.rar)
/* A module of ASYNCx.LIB version 1.02 */
#include <dos.h>        /* for inportb()/outportb(); the original header name
                           was lost in extraction, so this is an assumption */
#include "asyncdef.h"

/**************************************************************
 Send the character in c to the port associated with p.

 If there are no characters in the output buffer, wait for the
 UART to be ready to transmit another character, and send the
 character to the UART. Otherwise, wait (if necessary) for the
 output buffer to be able to accept another character, and put
 the character into the output buffer.

 This function returns the value of the character in c if it is
 able to either be sent to the port or put into the output
 buffer to be sent later. Later, when XON/XOFF logic is added
 to this routine, a -1 will be returned if an XOFF has been
 received and the output buffer is full.
**************************************************************/
int a_putc(int c, register ASYNC *p)
{
    byte *bp;

    /*
    ** If the output buffer of this port is not empty, attempt to put the
    ** character into the output buffer so that it can be sent later.
    ** Otherwise, just send the character directly to the UART so that it
    ** can be transmitted.
    */
    if (p->ocount)
    {
        /*
        ** Point bp to the position in the output buffer immediately after
        ** the current head of the buffer. This will allow us to see if the
        ** buffer can accept any more characters or not.
        */
        bp = p->obufhead + 1;
        if (bp == p->obufend)
            bp = p->obuf;

        /*
        ** Wait until the output buffer can accept another character. If the
        ** buffer is full and an XOFF character has been received, return -1.
        */
        while (bp == p->obuftail)
            if (p->recvxoff)
                return -1;

        /*
        ** Put the character into the output buffer and return the value of
        ** the character.
        */
        *(p->obufhead) = c;
        p->obufhead = bp;
        p->ocount++;
        return c;
    }
    else
    {
        /*
        ** Wait for the UART to be ready to accept another character to be
        ** transmitted. After it is ready, give the new character to the UART
        ** so that it can be transmitted, and return the value of the
        ** character.
        */
        while ((inportb(p->base + LSR) & 0x20) == 0)
            ;
        outportb(p->base + DATA, (unsigned char)c);
        return c;
    }
} /* end of a_putc(c,p) */
Talk:Euroipods
From Uncyclopedia, the content-free encyclopedia
"Featured!"
Comment
This page makes all of the jesii cry. --47Monkey MUN HMRFRA s7fc | Talk 05:25, 2 Dec 2005 (UTC)
I'm confused. WTF? Feature? AAAAA! --Poofers 06:43, 2 Dec 2005 (UTC)
Check the original version to get the idea. Typos in the article shouldn't be fixed! --Ruubio 07:05, 2 Dec 2005 (UTC)
The sheer audacity of having this deletable peice-of-crap as a featured article makes it worthy of being a featured article. Bravo!--Putz 07:18, 2 Dec 2005 (UTC)
This crap ain't the top quality crap I'm used to see in the front page. I quote: "WFT?". --212.147.119.42 13:12, 2 Dec 2005 (UTC)
Wow. I guess the yay and nay voting system for Features really is an ignorable policy. Always surprising me, Uncyclopedia. --Isra1337 13:23, 2 Dec 2005 (UTC)
- That's what I've been trying to say the whole time. --[[User:Nintendorulez|Nintendorulez | talk]] 16:53, 15 Dec 2005 (UTC)
Can I vote here for deletion? Can I? Huh? --)
- Tag it if you dare mate, but you're on your own with this one. :) -- T. (talk) 19:00, 2 Dec 2005 (UTC)
I have a feeling this is really a complicated joke involving featuring complete crap... --Chronarion 18:57, 2 Dec 2005 (UTC)
WTF; this is awful. I mean, I've written crap, but I don't try to get it on the front page. Mindspillage 20:16, 2 Dec 2005 (UTC)
I get the joke but it's very, very Lame... I mean like "Why wouldn't they let the stinkbug into the movie? HE HAD ONLY ONE SCENT AND IT WASN'T ENOUGH!" more lame than that lame. Bleek II 20:23, 2 Dec 2005 (UTC)
- There was no joke int he article. It's pure, unadulterated genius as it originally was, and not the crappy hack that someone made it into--Sir Flammable KUN 21:38, 2 Dec 2005 (UTC)
Too stupid to be featured. -:41, 2 Dec 2005 (UTC)
- Maybe YOU'RE too stupid to be featured. Ever think about that? Huh? Huh? Yeah punk! Yeah, this is getting old. I waiting for RC to put a new one up.--Sir Flammable KUN 21:44, 2 Dec 2005 (UTC)
- ...Okay... you mean my user page? - 21:49, 2 Dec 2005 (UTC)
Protect?
Why protect something like this one-liner? Shouldn't it be moved to Undictionary? :46, 2 Dec 2005 (UTC)
- There's humor here and you're missing it.--Sir Flammable KUN 21:50, 2 Dec 2005 (UTC)
- I don't see any humor either. Explain the joke to me. --[[User:Nintendorulez|Nintendorulez | talk]] 16:55, 7 Dec 2005 (UTC)
Talk Page
- Keep this piece of crap just for the talk page... I love this talk page so much, it is my only friend and distraction. What would I do without it? Tell me what?? I cut my wrist today, should become a nice scar tomorrow. Pizza. Madretsma 19:50, 17 April 2007 (UTC)
Rationale
Okay, so there seems to be some dissension as to whether Euroipods deserves to be the featured article. I'm here to provide you with a well-researched and thought-out answer: Yes. That is all, go back to work. --—rc (t) 23:56, 2 Dec 2005 (UTC)
I'm here to provide you with a well-researched and thought-out reply to your answer:Go fuck yourself. That is all, go back to wanking. (This one-liner wrote by an unknown genious should be featured.--200.161.136.227 20:54, 31 December 2006 (UTC))
Joke?
I think perhaps all the debate would be settled if somebody would explain the joke here, if any. --[[User:Nintendorulez|Nintendorulez | talk]] 01:40, 3 Dec 2005 (UTC)
This may explain the joke: MISTA T SAYS "CLICK HERE, SUCKA" If that doesn't do the trick, you should watch some Ed Wood movies. --Putz 07:58, 3 Dec 2005 (UTC)
Oh, I see, it's scarcely-meaningful pretentious bullshit. Gotcha. --Carlos the Mean 13:51, 3 Dec 2005 (UTC)
I still don't find this article funny. --[[User:Nintendorulez|Nintendorulez | talk]] 18:10, 3 Dec 2005 (UTC)
To quote Einstein the Mean "This shit is crap" I didn't say that Einstein did, how this got onto the frontpage noone knows. --Mr Gridenko 01:30, 4 Dec 2005 (UTC)
I was a skeptic as to weather or not this should actually be a featured article, at least at first. It pales in comparission to "J.D. Salinger", "AAAAAAAAA!", and "Redundancy" (and "Redundancy" for that matter). I however, though not compleatly, changed my mind after reading the article submitted by Putz above. Although I'm not quite sure if theres some deeper joke here, I do belive that it's "Featured Article" material. Lets all just agree that somthing new isin't always bad, and sometimes somthing new is gooder then somthing old. I only hope that no one tries to replicate it.--MrJimmy 05:32, 4 Dec 2005 (UTC)
- I agree. Making an article like this a featured article was damn funny this time. Doing it again would be just plain stupid. --Putz 22:58, 4 Dec 2005 (UTC)
- No, it wasn't funny, it as shit. Nothing about this article is funny in any way, shape or form. It's just a one-liner about freeipods under a different name. Normally this stuff gets QVFD'd in an instant. And to make matters worse the article has been locked to prevent people from making it funny. --[[User:Nintendorulez|Nintendorulez | talk]] 01:27, 5 Dec 2005 (UTC)
- It IS funny (please note that, if nothing else, the fact that I bolded and uppercased the word "is" makes Euroipods funny), but quite obviously not everyone will think that. Since some (Nintendorulez) have yet to stop argueing about it however, I will provide the joke behind Euroipods; it is as follows: Euroipods, when boiled down, isin't funny at all, but that, in essence, is what makes it funny (enough commas?). This artcle has a ban however, if anyone gets the bright idea that posting anymore true "one liners," it will cease to be funny and just be annoying.--MrJimmy 04:25, 6 Dec 2005 (UTC)
- That... doesn't make sense... --[[User:Nintendorulez|Nintendorulez | talk]] 12:00, 7 Dec 2005 (UTC)
- Just really quick I just wanted to note that I JUST NOW got the double negitive in the article... boy am I stupid...--MrJimmy Journal talk 05:56, 7 Dec 2005 (UTC)
- Another thing that's funny about Euroipods is the fact that this:
Euroipods A website giving away free ipods in return for a) money b) reffering freinds to do the same
- managed to inspire a talk page almost as good as Tourette's Syndrome's.--Putz 08:34, 6 Dec 2005 (UTC)
- Unfortunatly Tourette's Syndrome's talk page had a purpose, the people in there were really trying to argue for somthing that could be considered "just." We're just in here deciding if giving away free iPods in Europe, over the internet, is funny if you put it in encyclopedia form.--MrJimmy 16:24, 6 Dec 2005 (UTC)
Okay guys, here's the deal. I want you to compile a list of things you think are funny and then those will be the rules by which I choose featured articles. Here is a empty, pre-formatted list for you.
Things that are funny to me
1.Not Euroipods, which makes me very confused and upset
2.
3.
4.
5.
Please sign your name so that I know who to laugh at. --—rc (t) 02:04, 5 Dec 2005 (UTC)
- 1. Not Euroipods, which makes me confused and very upset.
- 2. Things that have a joke.
- 3. AAA AAAAAAAAA, AAAAA AAAAA AA AAAAAAAA AAA AAAA AAAAA.
- Seriously, explain the joke here. It just looks like a crappy page that failed to get QVFD'd.
- --[[User:Nintendorulez|Nintendorulez | talk]] 12:45, 5 Dec 2005 (UTC)
- 1. Euroipods, which makes me confused and very upset.
- 2. Everything else in the featured article page.
- 3. Mostly every other article ont he Uncyclopedia (With the exeption of any of the ones I wrote)--MrJimmy 04:31, 6 Dec 2005 (UTC)
- 1. Not Euroipods, which makes me confused but strangely horny.
- 2. Nipple Clamps.
- 3. Beating French people.
- 4. Hamsters
--AOL Vandal
- 1. Not Euroipods, because it is sad and lame.
- 2. Euroipods fans who are equally sad and lame.
--Emmzee 19:16, 23 July 2006 (UTC)
THIS IS NOT FUNNY
-Akanin
- Jesus, dinosaurs, dell dude. It's old news. Perhaps you should've read the discussion page before starting a whole new subheading. --[[User:Nintendorulez|Nintendorulez | talk]] 19:47, 5 Dec 2005 (UTC)
OK, I totally don't get it.
-L
lol another subheading
Because I can. --—rc (t) 04:20, 6 Dec 2005 (UTC)
Why Euroipods is funny:
Still there!
It was getting to crouded up there so I have moved down here to finally break down every possible joke within the article, specifically so that Nintendrulez will go away.
- Reason 1: As was provided above, this article is funny because it is not funny. It may seem absurd but it is the truth. See also this article from the prestigous University of Binghamton: [1]
- Reason 2: If you will notice (although you'd have to be as slow as me not to notice) the fact that in the first line of the article it states: "A website giving away free ipods..." Now note the second line, sub-devision a, it states "money," as in they charge for their "free iPods."
- Reason 3:...
- Reason 4:Profit or Prophet
- Reason 5: The talk page.
(There must be other reasons, sombody else put one in.)----MrJimmy Journal talk 18:20, 7 Dec 2005 (UTC)
- I added Reason 3 and 4 in becuase I was feeling stupid. Hoepyou don't mind --Sir Flammable KUN 18:28, 7 Dec 2005 (UTC)
- Heh, thanks. That livened it up a little, I added the "Prophet" onto the reason 4.--MrJimmy Journal talk 00:57, 8 Dec 2005 (UTC)
- So that's one "joke" in it's entirity, and it's a lame one. Not funny, not frontpage material. --[[User:Nintendorulez|Nintendorulez | talk]] 23:23, 7 Dec 2005 (UTC)
- Just rest assured that your only one unimportant person on the internet, and while you may be able to gather an army of people like you, I will always be here to fight them off. And it is too funny!--MrJimmy Journal talk 00:57, 8 Dec 2005 (UTC)
- If there's a joke, than what is it? --[[User:Nintendorulez|Nintendorulez | talk]] 01:40, 8 Dec 2005 (UTC)
- Your still pushing this? Your still asking what the joke is after I've provided it in many different, not to mention redundant, forms? Your presistant...--MrJimmy Journal talk 05:43, 8 Dec 2005 (UTC)
- It's not a joke if it's not funny. --[[User:Nintendorulez|Nintendorulez | talk]] 02:23, 10 Dec 2005 (UTC)
- Let's solve this by giving you the reason we give people who whine about fair bans: "Because we felt like it." It's silly, stupid, unexpected. Just let it go. -.-' --Sir Flammable KUN 01:42, 8 Dec 2005 (UTC)
Added by jerk IP user:213.121.151.134:
- Reason 5:Everyone likes the logical fallacy of having a page that 'is funny because it isn't funny'. Or do we. It's like that time white became black because it wasn't black at the time. It's like me calling you stupid if EVERYONE KNOWS you're smart. Now you can logically suck my phallus.
euroipods (video game)
It needs to be made. --Savethemooses 00:11, 8 Dec 2005 (UTC)
- It could be a fighting game, where people battle it out to see if Euroipods is really funny.----MrJimmy Journal talk 01:01, 8 Dec 2005 (UTC)
- Genius! --Carlos the Mean 06:07, 9 Dec 2005 (UTC)
That would be funny, but then we would have to keep this page....
Done.
A Resolution.
Naaaa na na na na na na, na na Katamari Damaci!--MrJimmy Journal talk 05:49, 8 Dec 2005 (UTC)
I don't mean to sound like I'm pushing it or somthing, but I think I wrote some pretty funny comentary stuff in my journal about all this. Also has anyone ever made a Talk:Talk:"Article here" page?--MrJimmy Journal talk 20:03, 8 Dec 2005 (UTC)
- I've made Talk:Talk:Main Page, but I think it's been deleted. Note that the tab link to the main article is red, despite Talk:Main Page clearly existing. --[[User:Nintendorulez|Nintendorulez | talk]] 22:40, 8 February 2006 (UTC)
- Well, I made Talk:Talk:Euroipods, and Splaka reverted it, and removed my post here saying I made it. Go figure. --User:Nintendorulez 18:50, 19 April 2006 (UTC)
- I didn't feel like surgically removing all your links to it, because I don't care enough to. I just rolled you back. The reason it is annoying is because it triggers as an orphaned talk page (which it is, and in most cases, those should not exist for main article space). So, stop making them, kthx. --Splaka 04:28, 20 April 2006 (UTC)
- It was linked to on this page when I made it, so wouldn't that mean it's not orphaned? I though orphaned pages means nothing links to them. --User:Nintendorulez 16:01, 20 April 2006 (UTC)
CapItols?
In the advertIsement (See the top), they capItolIze the letter "i" in iPods, Europe Is WeIrd.--MrJimmy Journal talk 05:38, 10 Dec 2005 (UTC)
- 'Tis a Marketing Technique....it works because your brain recognises that there shouldn't be a capital letter in the middle of the word, making it stand out GUYS! ENOUGH WITH THE DRAMA, you're making ED envious.
It was a joke. Jay Oh Kay Ee. Joke. Let it be. You're all beating it to death over the one day it was up. --Chronarion 05:20, 14 Dec 2005 (UTC)
I found this in the dictionary : Joke n funny activity or story.
Notice it says funny. So it wasn't really a joke, now was it?
- Dammit, I wanted to bring up the fact that it's not a joke. --[[User:Nintendorulez|Nintendorulez | talk]] 20:09, 18 Dec 2005 (UTC)
So why was it featured?
Sorry to bring this all up again, but how can you possibly say the reason was "because we like it" if it was voted strongly against on VFH? Surely that demonstrates that we don't actually like it at all? I mean, what is the point of VFH if not to gauge what we like? And surely this talk page proves that most of us don't like it too... --Carlos the Mean 01:07, 13 Dec 2005 (UTC)
- Sorry to thwart your continuing this arguement Carl, but quite abviously, "we" does not include you. "We" includes everyone who liked it, and there were people who liked it. So it is written and so it shall be featured.--MrJimmy Journal talk 05:51, 13 Dec 2005 (UTC)
- Amen! However, as the complaining has gained shrillness and lost humor (and I figure I've already "won," anyway), I won't be reverting changes to Euroipods anymore. That's right, if someone wants to expand Euroipods, go ahead. Now Nintendorulez can stop paying for psychiatric therapy and maybe buy a brain sense of humor with the extra money. "It's not a joke. It's not a joke. It's not a joke I'm so upset IT'S NOT A JOKE!" --—rc (t) 06:36, 13 Dec 2005 (UTC)
- So you mean we can fuck it up all we want now? This'll be fun...
- Good. Once this becomes a decent article, we can put this whole thing behind us and pretend it was decent when it was first featured. Then it will no longer be a disgrace to Uncyclopedia. --[[User:Nintendorulez|Nintendorulez | talk]] 19:48, 13 Dec 2005 (UTC)
- My point is that we does include me, and everybody in Uncyclopedia, hence the voting. If you just said that 'we' always includes the people that like it, then everything on VFH should automatically be featured... but the point is, Uncyclopedia (or at least VFH) is democratic and this was outvoted as a featureable article. I just think whoever's responsible should admit that they're wrong, because otherwise I can see the admins continuing to do things just because they like it, and not adhering to the wiki spirit. --Carlos the Mean 00:30, 14 Dec 2005 (UTC)
- I was responsible. I don't admit that I was wrong. I admit to doing something that amused some people, ticked off some others, and resulted in a fairly hilarious talk page - all the result of less than a day of featurization. I also believe that it was done in the Uncyclopedia spirit. Heaven forbid the admins do something unexpected/undemocratic that doesn't even affect the users aside from making them divert their eyes from a little box on the front page for a few hours. --—rc (t) 01:44, 14 Dec 2005 (UTC)
- You're right, actually, this is pretty Uncyclopedic. But the talk page isn't funny. That comment makes all of the Jesii cry. I think you should explain why it's funny in three or more bullet points... --Carlos the Mean 05:03, 14 Dec 2005 (UTC)
- Very well. From above (you have to wait for the punchline):
- "Okay, so there seems to be some dissension as to whether Euroipods deserves to be the featured article. I'm here to provide you with a well-researched and thought-out answer: Yes. That is all, go back to work." --Rcmurphy Sq.W (Talk) 23:56, 2 Dec 2005 (UTC)
- Waaaaait for it...
- "I'm here to provide you with a well-researched and thought-out reply to your answer:Go fuck yourself. That is all, go back to wanking." - IP
- Tell me you don't find that funny. --—rc (t) 05:24, 14 Dec 2005 (UTC)
- So that's one "joke" in it's entirity, and it's a lame one. Not funny, not frontpage material. --Carlos the Mean 10:06, 14 Dec 2005 (UTC)
- Wow, I'm the guy who wrote that one, and it only took 2 seconds to think of it. It wasn't even meant to be all that funny.
- I just "expanded" the article. You remember you said you wouldn't revert it, right?
- So were you just acting randomly, or were you taking the piss out of the stupid stuff people people vote to be featured sometimes? Because I know people do elect some pretty stupid stuff. (AAAAAAAAA!)
- Unlike Euroipods, AAAAAAAAA! had an actual democratic route to featured article and theme of the day. Mostly due to the wonderful efforts of an AOL vandal to try to vandalize it, giving it overwhelming anti-vandal sympathy and community support. Heh heh heh. --Splaka 23:10, 19 Dec 2005 (UTC)
Look, AOL guy who hates Euroipods, I don't care what you think/write about me or Euroipods, as long as you write it on this talk page or my user talk page. Quit vandalizing other pages. It's pathetic. --—rc (t) 04:39, 20 Dec 2005 (UTC)
That fight image is awesome
Nuff said. --[[User:Nintendorulez|Nintendorulez | talk]] 16:53, 15 Dec 2005 (UTC)
Holy snuff! Thats the first time I've ever been recognised on the internets, thanks for making my life.--MrJimmy Journal talk 03:30, 18 Dec 2005 (UTC)
Whoever made that pic kicks ass. --[[User:Nintendorulez|Nintendorulez | talk]] 20:09, 18 Dec 2005 (UTC)
Also, one ought to be made for Talk:Tourette's Syndrome. DrPoodle vs. World. --[[User:Nintendorulez|Nintendorulez | talk]] 19:42, 19 Dec 2005 (UTC)
This is the best I could do with MSPaint, if anyone else wants to replace it with a much better Photoshop version and put it in the correct place, go ahead. I know it's horrible, and in the wrong talk page, but Nin asked for it here.--MrJimmy Journal talk 07:04, 20 Dec 2005 (UTC)
Look, here's my point for why it shouldn't have been featured
Here it was on VFH:
According to all my math knowledge, the overall vote was against. And other articles had much, much, much more "for" votes and much less "against" votes than this one did. So why the hell were the rules broken and this crap featured? --[[User:Nintendorulez|Nintendorulez | talk]] 12:47, 23 Dec 2005 (UTC)
- "Rules"? --—rc (t) 00:53, 24 Dec 2005 (UTC)
- I've got it! I know why. To make Nintendo "Rules" incredably upset.--MrJimmy Journal talk 05:17, 24 Dec 2005 (UTC)
- Haha, very funny. Look, I'm trying to be serious here. Save the humor for the articles. Just because you're an admin doesn't justify violating concensus like this.
- Okay. As Dictator, I hereby authorize that one time violation of consensus. Yes, there is a pseudo democracy on this wiki. Yes, it is generally followed. Was it followed for this case? No. However, this was a couple of weeks ago, and you're dragging this out longer than Election 2000. The past ain't gonna change, so i'll take this day in North Korea to adjunct the rules. --Chronarion 22:41, 25 Dec 2005 (UTC)--70.107.134.109 22:39, 25 Dec 2005 (UTC)
Score: -8.5 --AAA! (AAAA) 10:33, 30 January 2007 (UTC)
Hey...
I think I finally get why this was funny. --Spooner 17:56, 26 Dec 2005 (UTC)
You've come back to the light side of the Euroipods.--MrJimmy Journal talk 06:29, 29 Dec 2005 (UTC)
Well, hurry up and explain it then! We're all waiting.
Me too!!!! It's funny because it's crap! *points to article* Tee-hee! --Timmytiptoe my talk 06:04, 28 July 2006 (UTC)
Moral of the Story
If you get rcmurphy on your side you cant lose cuz hes god and everyone knows it so bow now or be yelled at and embaressed for the rest of your life.(like nintendorulez) --4.252.210.192 02:19, 27 Dec 2005 (UTC)(Tompkins)
- That is correct, good sir. --—rc (t) 02:22, 27 Dec 2005 (UTC)
- Suck up. --[[User:Nintendorulez|Nintendorulez | talk]] 11:50, 3 Jan 2006 (UTC)
- Nuh uh, You're a suck:02, 7 Jan 2006 (UTC)
- And you're a... 3 year old?--MrJimmy Journal talk 04:35, 9 Jan 2006 (UTC)
- And your mom is... gay?:09, 22 January 2006 (UTC)
- Swish. Nice comeback! Almighty Stove 00:45, 26 May 2007 (UTC)
- Super get a real page --Mahmoud Ahmadinejad 02:43, 14 December 2007 (UTC)
Has ED started an article on this yet?
Not that they matter at all anymore, as we have swiftly crushed them in the Wiki Wars, but this stuff is (as Chron mentioned earlier in the page) exactly what those Cretins at encyc. dramaeaeaetica love to write about. --Savethemooses 16:39, 29 Dec 2005 (UTC)
It is fargin war!
Hey, where is my free Euroipod! What kind of scam is this anyway? Does Steve Jobs know about this kind of rip-off? How come I keep getting more spam email and my buddies that I signed up are mad at me? I live in the USA and this is discrimination because I don't live in Europe, isn't it? I want my free Euroipod, dammit! Don't make me come over there and invade Europe with my army of pirates and ninjas. Honestly, who is going to stop us? The UN? They don't even have an Army to sanction me with. What nations can stop us, France, don't make me laugh, The Netherlands, bunch of liberal wankers there. I mean youth socialism has created a lot of Eurotrash who cannot fight. What is to stop me and my army from walking in and taking our free Euroipods? Oh wait, it is from Eastern Europe? After the fall of the USSR, their economy is a joke. I could buy one of their cities for the price of a USA iPod. I can make a killing selling VHS copies of 1970's and 1980's USA TV Shows, because they are starved for good programs. Nah I think I'll plan the invasion, tell CNN and Fox News what I am up to, and bring back all the Euroipods to the USA that I can. --Orion Blastar 01:59, 30 Dec 2005 (UTC)
- Orion Blastar said: "The Netherlands, bunch of liberal wankers there." WHAT???????? (I am Dutch) --Timmytiptoe my talk 06:07, 28 July 2006 (UTC)
- I apologize The Netherlands are not a bunch of liberal wankers. I misspoke in a fit of anger. They are all nice people and kind to others, just don't eat the brownies at the Dutch coffee shops. --Lt. Sir Orion Blastar (talk) 03:19, 27 March 2008 (UTC)
I see the article has been "improved" by Nintendorulez
Congratulations on being able to keep that "not funny" message box there for so long. I thought some dipshit would've deleted it by now. --64.12.117.5 01:17, 3 Jan 2006 (UTC)
- HA!--MrJimmy Journal talk 05:50, 3 Jan 2006 (UTC)
- See, it stayed because it's the truth. Apparently people agree with me that it isn't funny. --[[User:Nintendorulez|Nintendorulez | talk]] 11:50, 3 Jan 2006 (UTC)
Well actually people would have obayed it by now if it were that true.--MrJimmy Journal talk 04:19, 5 Jan 2006 (UTC)
Dammit, Flammable accused me of vandalizing the page and banned me for 24 days. --Nintendorulez
AHHHH, 5 Jan 2006 (UTC)
- I don't like SPAM!--MrJimmy Journal talk 16:30, 6 Jan 2006 (UTC)
- Can we infer from the lack of objections that the punishment fits the crime? :) -- T. (talk) 16:48, 6 Jan 2006 (UTC)
- Nope. Us smart people are just taking our time to object. --207.200.116.197 21:10, 8 Jan 2006 (UTC)
- I can see why Nintendorulez would be pissed about there being a Nin on the page. Can we make it Anais Nin? Then it's funnier and it's obviously not Nintendorulez--RudolfRadna 21:54, 26 April 2006 (UTC)
Testimonials...
Does anyone else NOT like the testimonials?--MrJimmy Journal talk 16:33, 6 Jan 2006 (UTC)
I do, and I put them. Right. *Sniff*--MrJimmy Journal talk 05:26, 7 Jan 2006 (UTC)
- Personally, I am quite pissed off that my name is there supporting euripods. I hate it a lot, and I want that testimonial removed, or at least the name changed. I'd do it myself, but I have a feeling I'd get banned for it. --[[User:Nintendorulez|Nintendorulez | talk]] 17:35, 29 January 2006 (UTC)
- Yes, because when people see "Nin," they automatically think "Nintendorulez." --—rc (t) 17:46, 29 January 2006 (UTC)
- Well who the hell else would it refer to? It's an obvious attempt at being subtle and pissing me off. If it really isn't meant to refer to me in any way, then I suppose you wouldn't mind changing the name there. --[[User:Nintendorulez|Nintendorulez | talk]] 23:59, 1 February 2006 (UTC)
- Of course the people who have been involved in the Euroipods saga know who it refers to. But they know about your role in the whole thing anyway, with or without the testimonial. New people are the ones who I was referring to. --—rc (t) 05:17, 4 February 2006 (UTC)
- It's still an uncalled for joke at my expense, and as such I should be the one who decides whether or not it stays. I WANT IT REMOVED. --[[User:Nintendorulez|Nintendorulez | talk]] 12:36, 7 February 2006 (UTC)
- Isn't there a big, big joke at the expense of that Poodle guy in the tourette's article? And weren't you the guy on the Talk page extracting most of the urine? Telling him to basically shut his mouth and piss off? LOL! Nin, u rly rly rly rly rly rly rly rly rly rly rly rly rly suck the seven balls off your crackwhore of a mom, dude--82.44.21.151 18:37, 15 July 2006 (UTC)
- But David, there's no mention of "DrPoodle" or any derivative of his username in the main Tourette's Syndrome article. I suppose you could add one, though... Besides, User:DrPoodle is only an occasional contributor, whereas Nintendorulez is a regular contributor. So really, it's not quite the same thing. I'm in this position myself actually, though to a somewhat lesser extent because most of the pages that attack me are obscure sock-puppet user pages that hardly anything links to. But the same principle applies: Using main article space to attack established contributors, even if only the user being attacked notices it, and even if it might actually be sort of funny, is just bad for business. (Unless of course it's User:Mhaille who's being attacked, in which case it's highly encouraged!) c • > • cunwapquc? 18:59, 15 July 2006 (UTC)
- A bite! I havn't argued over the internet in years! So basically, it's okay to insult someone if you don't mention their name? And so long as they arn't a regular contributor? But shurly in either case the contributor is a HUMAN BEING!!! And surely the only reason why it is significant to put a person's name in an article, for as the guy in the Miller play says 'Because it is my name! Because I will not have another like it...IN MY LIFE!,' is because it is hurtful to that person. As such, given that the quantity of contribution is irrelevant and the cause of concern to each party is the hurt engendered by the content of personally orientated parts of articals - THEN surely sir, there is nothing to distinguish in terms of reprehensibility either the attack on Poodle and the wryyyyyyyyysubtle attack on Bintendo RUlz? yah? You must see - they are quite the same thing on a most basic level. let me lay it down:
- Both contributors are members of the human race foremost, posters on UNCY someway less and even further back they are different sorts of posters. We must concern ourselves with the fact that both are humans with human feelings, so where the hurt of feelings is the object of an action, both of these agents should be equally regarded when we are considering the impact of the hurting of the feelings in question.
- But are both of the examples of hurtful things the same? Yes you daft cunt! They both twist a person's words to hurt them. So why should we spare mentioning Nin's name? We want to hurt him! Using his name is one of way of doing it, like making the Poodle doctor look like he types all wrong!!!!!!! And as he, Nin, enjoys 'hurting' people, the pleasure should be immediate to his recall. 'Ah yes, now I recall when I was ripping the piss out of that uppity shithead spastic. lol I shure showed him!!!!!!!!!!!!!!!!11' So yeah, what's Nin's problem? He knows why it's funny to do this to people. it's fun to hurt people, people are all the same and so hurting different people still the same kind of hurting, so if you don't like being hurt by people then don't hurt people. because then you are doing something which is exactly the same as the thing you then say should not be done.
- ftw!
- Twisting words and meanings as usual, I see. Oh well, I guess I have to go over this point by point: Yes, I'd say it's okay to insult someone in the main namespace as long as you don't mention their name, or any derivative of it, because that way the rest of the world doesn't necessarily know who or what you're referring to. There's a level of deniability that both you and the victim lose when you start naming names. Honestly, if I were insulted here in such a way that I knew, for a fact, that no other user on this site had any idea as to who the insult was directed at, then sure - why would I care? In fact, I could probably just go in and edit the offending material and only me and the original offender would know why I did that. But that's not the case here. And of course being a regular contributor makes a difference. This is a community, and User:Nintendorulez is a member of this community whether we like it or not. He must be given a greater amount of respect on that basis alone, even if he deserves the insult, because if we don't, everyone else here is going to be thinking "will I be next?" It's what we call a "chilling effect," and it makes people leave, and often go somewhere else and bad-mouth us, at a time when we can't afford a lot of defections.
- And no, it doesn't matter, and wouldn't matter, if there were other, legitimate reasons for using the name "Anais Nin" in the article. The point is that we, the members of this community, know what it means, we know why it's being done, and we know why the people who are doing it are refusing to stop. And even if we totally support them, there's nothing especially priceless about that gag (if it can even be called that) beyond its intent that makes it worth keeping. Maybe it's just my opinion, but better-quality stuff gets deleted here every hour of every day, and few people cry over it. And as for User:DrPoodle, personally I'm willing to give him the benefit of the doubt too — if he wanted those talk page entries to disappear, I'd support that. But again, his name isn't used in the actual Tourette's article, and that makes a difference, at least to me. And really, the whole question of whether or not it's fun, or funny, to hurt people is something I see as immaterial — but even there, your premise is flawed. Offensiveness used in the service of humor is fine, but humor used in the service of offensiveness is not (even if we occasionally do allow it, for whatever reason). Particularly if the offensiveness is directed at a known individual.
- Last but not least, you resort to this equally-flawed notion that both situations are the same simply because both Nintendorulez and DrPoodle are human beings. (Technically, they're anonymous textual avatars of entities that might be human beings, but that's of little relevance here.) That's simply a straw-man argument. People get psychologically hurt all the time, by all kinds of things — it's unavoidable as long as people have emotions and feelings. But once again, you're trying to apply Wikipedia-based notions of civility to this site, and that's simply wrong. I understand why you do it, and I even understand why civility is essential, but in this context, civility has a completely different meaning. In effect, we have to convince everyone, and especially n00bs and AnonIP's, that we don't take anything seriously, and that they shouldn't either. There's far too much offensive content here to try to make the claim that offending people is wrong and "should not be done" — that would be complete hypocrisy. And obviously, it would be nice if Nintendorulez would simply shut up and not complain about the Anais Nin reference, but he isn't doing that. And as long as he isn't doing that, we have to weigh the benefit of consistently maintaining the policy of never backing down against the negativity this situation is generating. I don't think the negativity is worth it. Two wrongs don't make a right. It's a simple difference of opinion, and I'm not even saying you're wrong in any objective sense. In fact, even after all that's happened, I still have some degree of respect for you. But your hiding behind an AnonIP while in the midst of a deletion spree in order to call me a "daft cunt," when it's obvious who you are, isn't going to convince me you're right, either. c • > • cunwapquc? 02:17, 16 July 2006 (UTC)
- Why does this always happen? Yes I could type tl;dr, but that would be so bloody facile. And I'm simply not that cool either. So it's wrong to take things seriously, but if you take things seriously enough then you should have your demands acceded to in order to shut you up? kewl. Offensiveness in humour is ok, but humour in offensiveness is not? But shurly, shur, the Nin thing is humour on account of it's offensiveness? Right? Yeah I-I'm right. So it's offensiveness in the service of humour as well as being offensiveness in the guise of humour - but for god's sake, if we're going to be offensive then SOMEONE is going to get hurt! In this case it's Nin, and in that case I think it's allowable because his outrage is also the cornerstone of an interesting hypocrisy. CWIM (sea wot eye mean?)?
- Two wrongs don't don't make a right? That sounds dubiously moral for uncy - isn't an eye for an eye a better adege for this place? Not 'let's all behave and get along and then we'll all behave and get along.' Christ that's SO Wikipedia it makes me want to shove needles into the joins of my index finger and dislocate it!
- As you can probably tell, I'm having difficulty finding the patience to fully understand you. You appear to contradict yourself on points of manners. Please, find the time to put it simple. Then perhaps my answers will make sense, until that point you not only resemble a daft cunt. You are a daft cunt. Furthermore, elongated mutant pretentiousness in language will never impress me no matter how very hard your try XD ftw! P.S. I'm so confused, who the hell do you think I am anyway?--82.44.21.151 18:20, 16 July 2006 (UTC)
- I'm not the one who put the expletives into DrPoodle's quote. I initially took a Wikipedia article, and inserted the swears. Then he complained, and someone put the swears in his complaint, all went nuts from there. I would be against the changing of DrPoodle's post if it weren't for two things:
- He blanked the main article with it, rather than leaving a simple complaint on the talk page. IMHO, that makes the content fair game. Fisher Price, anyone?
- It's too much of an integral part of the joke to change at this point. We have the parody complaints from China and whatnot.
- So it's not quite the same. --User:Nintendorulez 00:52, 18 July 2006 (UTC)
- Nin, is that you? I'll pretend i've got a file detailing the various points of this argument sitting right next to me and refrain from going 'What the fuck is not quite the same?!' Yeah, because I'm totally totally up-to-date on this thing, swim? So basically - what are talking about, what isn't the same? And are you saying that taking the piss out of tourette's is ok if someone with tourette's pisses you off, in the same way that bombing Dresden was ok because the nazis really pissed everyone off? Answers on a soggy brown postcard, plx! XXXXD btw, if I have not already won, then now i win.--82.44.21.151 19:29, 18 July 2006 (UTC)
- Sorry dude, but this is Uncyclopedia, and we don't "dumb it down" for anybody, especially someone who can't even be bothered to register an account. So I'm afraid you not only don't win, you lose, and in fact I win, because I say so. Is that more understandable, Mister Twistwords? c • > • cunwapquc? 07:57, 19 July 2006 (UTC)
- I can't even figure out what the hell he just said... But I've explained the differences. He vandalized the article with his letter. So it was part of the article from the beginning, and released under the GFDL. We simply made his letter funny, and he wanted to remove his contribution. The difference here is that I never vandalized this article, someone just put an attack on me into it. --User:Nintendorulez 15:49, 19 July 2006 (UTC)
- Mr. Twistwords here! Dumb-down? I'm only asking you to make sense, for christ's sake! Do it for christ, you insane fucktard. The point is, if there is a point, and if there is a point, at all, then the point is that I, Mr. Twistwords, as you have so dubbed me, cannot penetrate the sliding, sloping moutain of dog-shit that was your reply to one of my more heated remarks. My point is simply this: Long years on the debating circuit should bless me with the patience to listen to a man who thinks he has eight brains and need to let them all speak together at once. But it has not, I 'zone-out,' my masturbating arm twitches to flick the page back up in order to re-read the entire sphere of crapulance you have created, but then.....-sands of time-.....and I just think 'can i be fucked?' I know that either your argument will fail on a basic level, that you are mistaken or the point you make is utter madness. or, you know, prove me wrong by getting one person in here with the patiance and time to follow firstly my powerfully compelling initial argument, your grinding abortion of a rebuttal and then my irreverent couldn't-care-less subsequant responses. And Nin, you frigging lunatic - can't you see that the idea is that you took the piss out of someone with a real grievence with utter delight, now when people rip it outta your flaming red-hot ass you cry 'noes!' and try to push it all in again. Yeah, he vandalized the artical with his measured remarks and was destroyed for his crimes - so what? I don't care, you don't care, but therefore we should neither care if you are mildly ridiculed for simply being a little self-inflated with your own crazy little opinions about some article on ipods, rite? s'funny! not harming anyone, so stfu and remove your hands from your anus - the shit has already been extracted, sir!!!! XD And why did i get called David earlier? And surely I win because I said so first? neither of you guys are straight, are you?--82.44.21.151 02:02, 31 July 2006 (UTC)
- Erm, pardon me for interjecting, but... S. U., Nin... why on earth are you even humoring this person? Really, all it's doing is adding useless length to an already uselessly long talk page... oh wait, now I'm guilty too. Damn. Well, I guess, in summation, D.N.F.T. Oh, and Mr. IP Number, where can I get some of what you're smoking? --The King In Yellow (Talk to the Dalek.) 19:27, 31 July 2006 (UTC)
- Why am I humoring this person...? That's a fair question, I suppose. I thought he was someone familiar, based on the geolocation of his IP address, his use of Australian invective (despite that location), and his intimate familiarity with Uncyclopedia backstory going back over, like, nine months... (And also his quickness to hurl such invective at me, of course.) But I'm now about 60-70 percent certain I was wrong initially, and that this is actually someone from Encyclopaedia Dramatica. My knowledge of that site is limited, but I suspect this could be "User:Samsara" or even LJDrama himself, even. (Woo hoo!) The point of the exercise seems to be to get revenge on Nintendorulez for "trolling" ED, i.e., by going over there and daring to suggest that Uncyclopedia would win some sort of objectively-judged contest comparing the ten "best" articles on each site, and no doubt a variety of other endlessly fascinating ideas. Either way, I'm through with him. After all, I already won, what, three entries ago? c • > • cunwapquc? 19:47, 31 July 2006 (UTC)
- Hard for me to say, honestly, as the overall coherency of the postings dropped well below the watermark set by M. P.'s "Silliest Sketch Ever" in 1970. (That might warrant an award.) But, seeing as you emerged relatively lucid from this ordeal, I'd say the judging panel would have to default victory to you, yes. Well done, Mr. Clean-Air System! As to the identity of this strange fruit, I'm at a total loss, but you have my sympathies nonetheless as you seem to already be accquainted with "persons" of this particular ilk. --The King In Yellow (Talk to the Dalek.) 19:59, 31 July 2006 (UTC)
I just thought of something.
I've always found this article incredibly stupid, and it ought to have been deleted in the first place due to being an unfunny one-liner, but there's one more reason I just noticed: It's vanity. I think everyone but the article creator never heard of the site before reading this article. All this article does is give them free advertising. If you really want to feature some unfunny stub, this one shouldn't be it. In fact, I think the reason why rcmurphy bypassed the democratic voting process to nominate this is so that he could obtain referrals for his iPod. Wikipedia does not have an article for this website, so why should we? It's extremely insignificant, and should be labeled as vanity and deleted. --64.9.10.166 19:48, 6 Jan 2006 (UTC)
- 1. We're not Wikipedia, we are a parody site that is loosely based on Wikipedia. If you read Wikipedia so much, you might already know about their article about Uncyclopedia, where they state that we are more of a parody of ourselves then anything else. I fail to see why you can use the "Wikipedia doesn’t have it" argument, when we also have hundreds of funny articles that the Wiki would never even consider adding.
- 2. The free advertising bit doesn’t work either jack, at least not generally. You see there is a reason why they call it Euroipods; it’s only available in Europe. If you note the disclaimer in the article it mentions a number of places that do not carry the ability to receive a Euroipod, it is completely true. While they do bring in slightly more revenue with our contribution, it is stunted by the fact that you can't get one unless you live in Europe.
- 3. "If you really want to feature some unfunny stub, this one shouldn't be it." Okay, I'll address this one too. I'm sure that if anything is featured, it is funny enough to someone to be so. I'm sure that even though it got vetoed allot, it seems to me like your all fighting a loosing battle. Consider the fact that this might be for half a second <dramatic pause> funny. Okay now picture that your not alone in thinking this. Okay now you know how some of us feel, you can now go back to your previous disposition. I don't want this to become a habit, this article has comedy, but it is comedy that is funny the first time. The second time around, it's just stupid, so I agree that in the future featuring articles like Euroipods should not be tolerated, as it steles from what little visible comedy that Euroipods has.--MrJimmy Journal talk 00:44, 7 Jan 2006 (UTC)
- Consider the fact that this might be for half a second <dramatic pause> not funny. Okay now picture that your not alone in thinking this. Okay now you know how some of us feel, you can now go back to your previous disposition. And you didn't address the point of whether or not it's vanity. It's an insignificant little website that's not important enought to have an article. --72.21.41.138 03:06, 7 Jan 2006 (UTC)
- It IS funny , to somone, me, RC, and everyone on the left side of that poster up there. I diden't need to adress vanity, your just pulling out the small point that I diden't cover as a way to spite me. Observe, this is funny, it's a blatent ad, and it's funny. Now would you all please FYAD.--MrJimmy Journal talk 05:23, 7 Jan 2006 (UTC)
- Your reasoning sounds a lot like someone we know. Go home NR, we know you know how to use proxies. Do something else, like buy a BB gun and shoot some cans to get out your frustration.-- 03:13, 7 Jan 2006 (UTC)
- I resent the implication that I created this uproar purely as an advertising gimmick. --—rc (t) 03:11, 7 Jan 2006 (UTC) Get your Euroipod!
- 4. Its a senless accusation that you made when you assumend that RC featured it for his own benifit. And besides if he had featured it to get more signups, how would he benifit from it at all? He woulden't. And even further, if it were possible, and he had exploited the Uncyclopedia for this purpose, woulden't that make the article more funny, or at least more historically valuable?
- 5. Lastly, I went and read the vanity page rules, and I agree with you that Euroipods fits the criteria in a way, but not the way that was intended for a vanity page. Sure it's only funny to a handfull of us, but firstly when I came across the page I diden't like it either, and I have since found the comedy within it. A vanity page is an article that only a handful of users would understand AS A RESULT OF the access that those users have to the topic of the page (ie. a specific person, or game based clan). Euroipods differs for this in the way that everyone has access to the resource that spawned the page, everyone has the ability to understand, as well as think it's funny. If it smells, looks, feels and tastes like a vanity page, it must be. And it is, but just because you don't understand it dosen't mean we havn't provided you with the means to. Secondly this page is still valid, it's funny (to somone), if you really don't like it then go away and stop runing it for those of us who do.--MrJimmy Journal talk 22:28, 8 Jan 2006 (UTC)
- If we all have access to the source, why don't you tell us exactly where the source is so we can decide if it's funny? I've gone to euroipods.com, but I haven't seen anything that makes this page seem funny. Was there a specific part of it that I missed? --205.188.116.130 21:20, 28 January 2006 (UTC)
- So in order to not be vanity, the jokes have to make sense anyway? Um, sorry, because I STILL have yet to understand the joke. --[[User:Nintendorulez|Nintendorulez | talk]] 19:22, 29 January 2006 (UTC)
edit This is what, the FIFTH time you feautured this damned thing??!?!
To whoever violated consensus and featured an article that already was featured, FUCK YOU. --[[User:Nintendorulez|Nintendorulez | talk]] 19:20, 29 January 2006 (UTC)
- What? --Sir Volte KUN Talk (+S NS CM Bur. VFP VFH) 19:22, 29 January 2006 (UTC)
- *does a double take* I swear to god, five minutes ago I saw Euroipods as the featured article, and all four "previously featured" articles were also euroipods. Another admin must've reverted it, or the same admin reverted it and is fucking with my mind. --[[User:Nintendorulez|Nintendorulez | talk]] 19:39, 29 January 2006 (UTC)
Wait, I figured it out. From looking at the main page source:
<choose><option weight=99>{{Featuredarticle}}</option><option>{{:Main Page}}</option></choose>
That means there's a 1 in 100 chance of seeing:
Bastards. --[[User:Nintendorulez|Nintendorulez | talk]] 19:44, 29 January 2006 (UTC)
Okay, Algorithm did it --[[User:Nintendorulez|Nintendorulez | talk]] 19:47, 29 January 2006 (UTC)
Damn, and Algorithm is nominated for Uncyclopedian of the month. Quick, everyone go vote against him.
edit Nominate for featured?
I'm actually refering to this talk page. - 01:12, 18 March 2006 (UTC)
edit Wikipedia has an article about euroipods, but soon won't
PWNED!
It's been deleted from WP. I'd change the template meself, but, uh... well, we know what happened last time I tried to take trivial matters into my own hands... --[[User:Nintendorulez|Nintendorulez | talk]] 22:38, 8 February 2006 (UTC)
- Ahh, gotta love that "PWNED" picture that's been added to the article.
- How do you like this one? --—rc (t) 03:34, 14 February 2006 (UTC)
- Quite good actually. Probably fake though.
- Oh yeah? Pwned! --—rc (t) 01:00, 16 February 2006 (UTC)
- They should go back to doing that. How did this survive all the smart people trying to get rid of it anyway?
- Never underestimate RC's stubborness. —Hinoa KUN (talk) 17:36, 21 April 2006 (UTC)
- Well said, Hinoa. --User:Nintendorulez 18:50, 27 April 2006 (UTC)
edit People are missing the point
This is funny because it's funny that n00bs really think they can get free iPods. I mean, come on that's pathetically funny!
--RudolfRadna 16:01, 27 April 2006 (UTC)
- I get the joke. I also get that it sucks ass and belongs on ED more than Uncyclopedia.--Emmzee 15:59, 24 July 2006 (UTC)
edit Ripped Off
Just in case any of you felt ripped off (which I very much doubt), you can always go and see the real website. Just a point because nobody has noted that there was actually a website and it wasn't (completely) a joke. ~ 17:03, 21 April 2006 (UTC)
- Everyone has noted that it's a real website --LinkTheGameFreak Bitch here17:10, 21 April 2006 (UTC)
- ^ What he said. But it isn't a notable website. VANITY. --User:Nintendorulez 20:27, 21 April 2006 (UTC)
- Meh. Who cares? ~ 12:19, 22 April 2006 (UTC)
- Hmm... *reads talk page* Evidently a lot of people do. --User:Nintendorulez 18:17, 22 April 2006 (UTC)
- *also reads talk page* No, they just care about the article. ~ 10:56, 23 April 2006 (UTC)
- Oh, I almost forgot, nobody cares is another unfunny article. thanks for reminding me.
- Are you starting it AGAIN!? Meh. Nobody cares. --Timmytiptoe my talk 06:25, 28 July 2006 (UTC)
If there aren't any free electronics, it's a rip off, man. just because a person is new to a website doesn't mean the person IS a N00B. Now I find out after reading this whole article there aren't any E-Pods!!?? No electronics!!?? I feel so dirty and so used. It's the holidays. What a way to kick a man in his bawls.
edit ATTN: Emperor
Put your fucking clothes on, I can see your wing-wong. --User:Nintendorulez 18:49, 27 April 2006 (UTC)
- OH MY FUCKING GOSH, Y'ALL ARE STILL FIGHTING OVER THIS BULLSHIT!?! Ok, Euroipods was Featured 2 weeks before CHRISTMAS, its the day after CINCO DE MAYO. Let it go. Just let it go. It has been featured and never will be again. Move on. I have a suggestion: Fisher Price. --Anidn Needs Ignorant Dumbasses to Not Meaninglessly Exile him from InterNet MeetingrOoms MUN SLACK10 ⌘ .az VforIRCop(HOLLA ATCHA BOY!) 18:20, 6 May 2006 (UTC)
- Never! Besides, I had to make the joke... --User:Nintendorulez 21:22, 6 May 2006 (UTC)
- I agree with Nintendorulez. This page isn't funny at all. And if it is true that unfunny is funny, then why the hell does anything get deleted? That is backwards to the entire concept of Uncyclopedia.
- Also...it's ugly as hell too. I would even accept its unfunniness if it didn't look like the pile of crap it is. I mean, SERIOUSLY! Fix it! Flameviper12 20:49, 17 May 2006 (UTC)
- That's mostly been my point. If we feature this, then I imagine some contributors will whine about their crap articles being deleted and say "but u fetured teh yooroipds adn taht was nowher neer as teh funnay as mine article abowt tihs guy at skool naemd joe who is teh gayz0rz so y u deleet my article?" If we keep this one-line article, we'll have to keep them all. And continue featuring them. And who wants that? --User:Nintendorulez 01:02, 18 July 2006 (UTC)
- I do. because it pisses you)}" > 14:36, 25 July 2006 (UTC)
edit An excerpt by Flameviper12 05:13, 21 May 2006 (UTC)
- "This article is hilarious!" said
O'Brienthe admin. "No it isn't!" said WinstonFlameviper. The admin sent a shock through Flameviper's spine with the dial. He held up a printed copy of Euroipods to Flameviper's face and said, "Is this funny?" Flameviper adamantly replied "No, it sucks!". The admin held up the page again and said, "If the Partythe Cabal says it's funny, than is it?" Flameviper still said "NO!" and another wave of pain flowed through him...
- from 1984
- Ignorance is Strength. Who personally should care about this crap pile? -. 21:29, 28 May 2006 (UTC)
- SHUT THE HELL UP AND FIND ANOTHER TOPIC TO FLAME)}" > 19:39, 7 June 2006 (UTC)
edit l'empereur n'a pas des vêtements
I sort-of get the article itself, but I don't get the phrase "The emperor has no clothes" that is used. Can someone explain? ~ 13:57, 8 July 2006 (UTC)
- It refers to willful ignorance of something obvious - in this case, that Euroipods isn't funny in itself. Nin likes using the phrase (repeatedly) because he thinks it's clever, which it isn't. —rc (t) 16:25, 8 July 2006 (UTC)
- Well, that's obvious (that it isn't clever, that is). ~ 16:26, 8 July 2006 (UTC)
- It makes sense to me. Crazyswordsman 02:00, 21 July 2006 (UTC)
- You know what would be funny, methinks? If we made UnBooks:The Emperor has no Clothes as another spoof of this. Crazyswordsman 00:02, 22 July 2006 (UTC)
- Which could also be based of a parody of whatshisname's book called "The Emperor's New Clothes" or something like that. ~ 11:43, 30 July 2006 (UTC)
- No shit, Sherlock. Crazyswordsman 01:38, 7 August 2006 (UTC)
What you're all missing is that it should be "n'a pas de vêtements". Bloody Yank monoglots. EamonnPKeane 20:07, 11 December 2007 (UTC)
edit Euroipods is the worst
What kind of shit is this? You fuckers are worse than ED. Junky junky junk! Oh yeah, go here.--71.70.221.221 02:53, 23 July 2006 (UTC)
- For once, I'll agree with an IP. This is the kind of junky junky junk that really shouldn't be here.--Emmzee 18:58, 23 July 2006 (UTC)
edit Would someone please remove the reference to me already?
"This company purchases iPods from the Apple Store directly.... as well as boosting sales from iTunes... Apple are more than happy to have this company give out free iPods, I received one myself - it worked a treat! Flogged in on [[EBay|eBay]] mind you, for 200 quid! No complaints from me!" - '''<span id="Nintendorulez">Anais Nin</span>, USA'''
Not funny, and I seriously don't want my name on this shit page. --User:Nintendorulez 22:42, 12 October 2006 (UTC)
- Hey, Nin, get over it. Clearly the reference to you is a sign that we love you. People refer to me negatively all the time and I do not get all bent out of shape about it. Loosen up and find something else to whine over... please??? -)}" > 17:38, 13 October 2006 (UTC)
- Yeah, you're clearly only going to encourage more people to take the piss if you get uppity about it. If this is the worst thing that happens to you this year, be grateful. But quit the tedious whining. It's not like anyone's taking your mother's name in vain or anything. Learn to laugh at yourself, it's one of the most important lessons in life. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 17:51, 13 October 2006 (UTC)
- Oh, I'm definitely writing my "Emperor has no clothes" UnBook this weekend. Crazyswordsman...With SAVINGS!!!! (T/C) 20:03, 13 October 2006 (UTC)
- I'd just like to point out that in five days, this article will be one year old! Happy Birthday, Euroipods! But somebody should still get rid of the "Nin" reference, in my opinion. If someone did, though, what would happen? Insta-ban? Revert-and-a-warning? Or would it be allowed to stand, assuming it was replaced with something that was really funny? I'm just curious. c • > • cunwapquc? 05:40, 14 October 2006 (UTC)
- Hey, I don't mind being referenced on funny articles. But not this. And I'm not even an admin. I daresay I'm a non-notable user that Codeine's Mum hasn't heard of, and I don't want to be mocked in a mainspace article. Especially an article the admins decree I'm not allowed to edit, so it's not like I can make it less insulting. --User:Nintendorulez 20:18, 14 October 2006 (UTC)
- Well, then... Maybe I'll work on a replacement for the entire section, post it on the one-year anniversary as a "birthday present," and just see what happens. If I get banned then so be it, but at least I'll have acted on the basis of my convictions, and all that rubbish. Still, it would be nice to know what might happen in advance, just for old times' sake! c • > • cunwapquc? 21:02, 14 October 2006 (UTC)
Holy shit we are back on this again!!! I had to double check the timestamps to make sure I wasn't dreaming, maybe, nin, just maybe if you don't whine about this for a whole year the admins might consider removing it, but bringing up the same compalint every few months really isn't getting you anywhere FFS!!!--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 23:41, 14 October 2006 (UTC)
- But how is he to know that the reaction wouldn't be something like, "You haven't complained about this for a whole year, and now you decide you're offended by it? What's wrong with you?" Besides, wikis aren't monolithic group identities operating in lockstep, they're diverse communities of people who all have (potentially) a wide variety of opinions. That includes admins - one admin might say it's perfectly OK to replace the "Testimonials" section, and another might come along after the fact and ban the person who replaces it for 6 months, 6 years, or 6 millenia. At Wikipedia, they even have a policy page on it called "No Binding Agreements" or something of that sort. It sucks, and it's one of the big reasons why problems occur between admins and non-admins. The best solution is for admins to simply not do things like this, no matter how justified they are... But maybe that's too much to ask. c • > • cunwapquc? 04:45, 15 October 2006 (UTC)
- That's a perfect summary of how the admins would respond. Admit it Elvis, either way you'd be flaming me. --User:Nintendorulez 22:42, 29 November 2006 (UTC)
Very funny, Elvis. Fuck off. --User:Nintendorulez 22:24, 16 October 2006 (UTC)
- Thank You, Thank You, I'm available for childresn parties, etc., etc.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 23:39, 16 October 2006 (UTC)
- Seriously, undo that right now. I WANT MY NAME OFF. Expecially a link to my userpage. --User:Nintendorulez 23:52, 16 October 2006 (UTC)
- Seriously speak to the ain't listening.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 14:14, 18 October 2006 (UTC)
- Nin, ok, i made it me. Happy now... go get a life. -)}" > 16:58, 18 October 2006 (UTC)
Elvis, I never made any "whinny temper tantrums" as you so call it. I politely requested months ago that my name be removed, and was instead mercilessly flamed for it, and the reference made to more and more obviously point to me. Now it actually links to my userpage. I find it quite unfunny to mock an ordinary user in a mainspace article, without their permission, and on a page that the individual is specifically barred from editing. An admin maybe, since they run the site, but it's just plain mean to mock regular users in the mainspace. Perhaps if it was funny in the least bit, I might not mind. But it's just unfunny, and serves no humorous purpose other than to mock me for disagreeing with the cabal. And I can't even edit the page to make it less insulting. That's really cruel and unfair. All I ask is that my name be taken off. That doesn't seem like too much to ask. I would appreciate a direct response, rather than more flaming, and you'd better not try making the reference even worse. Please, I just want my name removed. --User:Nintendorulez 20:17, 21 October 2006 (UTC)
- Dude, it's unfair and cruel that we have to deal with kakun, but we got over it. Nin, seriously, we would not keep messing with you if you would just let it go. Its an article that hardly anyone reads anyway. -)}" > 04:41, 27 October 2006 (UTC)
- But it doesn't work that way, Anidn. If someone is sticking a needle in your back, you forgive them for it (and/or "let it go") after the needle is removed, not while they're still sticking it in your back. I honestly don't understand why people don't see that, but I guess I'm just a "pathetic whiner," huh? c • > • cunwapquc? 18:45, 27 October 2006 (UTC)
- Exactly. I'm glad to see there's at least one person who can see my point of view on this. Would it hurt in any way to take the reference out? No. But it does hurt to leave it in. Why have it there? --User:Nintendorulez 18:57, 27 October 2006 (UTC)
edit Compromise
Just remove the link, or have it link somewhere else. That way the unsuspecting person will not know that it's Nin, but Uncyclopedian insiders who know the truth would know who it is really referring to. Crazyswordsman...With SAVINGS!!!! (T/C) 06:30, 3 November 2006 (UTC)
- Check the article history, CSM... That's how it was from January 6 all the way until October 15, and if it was interpreted as an attack on a user by admins for that entire nine-month period, why should it be interpreted differently now? c • > • cunwapquc? 06:54, 3 November 2006 (UTC)
- Well, I'm glad it no longer has a link or span tag, but I really wish it just didn't mention me at all. --User:Nintendorulez 19:57, 3 November 2006 (UTC)
- Uhh.. there's no link. Just the word :26, 29 November 2006 (UTC)
- Forum:It appears that Euroipods is STILL causing problems... --Hrodulf 23:29, 29 November 2006 (UTC)
edit What the hell is this crap?
i have been on uncyclopedia more than a year and I just decided to look at this article. This is the biggest piece of shit I've ever seen. And sorry, but this is not unfunny so it's funny either. It's just......what's the word I'm looking for? Oh yeah...moronic. And i just love how because the admins think it's funny and about 99 percent of everyone else hates it, it gets featured. Brilliant. Nor do I care that this is almost a year after featuring. Whoever decided that their opinion was better than everyone else's is a pompous, arrogant, hyper-conservative ass. End rant. 24.94.19.107 09:05, 5 November 2006 (UTC)Whatever
- I agree with you for the most part, but why "hyper-conservative"? I thought that overinflated conceptions of the importance of one's own opinions transcended politics. Meanwhile, the article would probably be funnier if it were pared back to its original state, and then a single image were added just below it... The trick is to find the right image. Then, if you click on that image, you see all the added "spammy" parts. Hmm, maybe I'll start a forum on it! Oh, and you forgot to link everyone else. c • > • cunwapquc? 14:14, 5 November 2006 (UTC)
- I have never thought of this article as that great. It's an in-joke, what can you do? Thankfully, Uncyc has moved on. Or has it? -- Hindleyite 14:17, 5 November 2006 (UTC)
- It would be a lot easier to move on if the "Nin" were removed, as the section just above this one would indicate! Hopefully then we could all just forget the whole thing ever happened... I'll admit that I have certain ulterior motives for taking this position, but I'm hoping to help end all the pointless "drama" and negativity here by doing so, not create more. Overall, though, I just see it as really unfortunate that certain individuals seem to insist on using articles as a punishment technique, and this is a prime example. c • > • cunwapquc? 14:56, 5 November 2006 (UTC)
- Yeah, at this point I just want my name removed... --User:Nintendorulez 20:18, 22 November 2006 (UTC)
- Since Nin's name will never be removed, I added my testmonial, name and link to my userpage right below his. Now he's not alone. I don't know if this helps or makes things worse but I suppose time will tell, and hopefully, it's funny. --Hrodulf 14:09, 26 November 2006 (UTC)
- Pretty much anything other than my name will do here. >_> --User:Nintendorulez 14:38, 26 November 2006 (UTC)
- Yeah. The thing about it is, people don't want it removed because they're enjoying watching you complain about it. Removing it would end their fun. Sorry, but it seems to me like it's unlikely to happen. I added my name so I could look stupid also. Now we're both idiots. --Hrodulf 14:55, 26 November 2006 (UTC)
- Dun dun DAH! I replaced it with "Oscar, UK". How long do you think it'll be before I'm reverted, banned, and murdered in my sleep? You can remove your name too, Hrodulf, if you want. Though personally I think the everything under the "featured" template should be taken out, save for a "see also" section. But that's just me. • Spang • ☃ • talk • 18:13, 26 Nov 2006
- My guess would be about 36 hours, give or take, before User:Elvis or User:Mhaille reverts it. Actually, it might have a better shot at survival if you change it to "Mister Somey, Iowa, USA." That would set the Wikipedia-lovers off like nobody's business, and I certainly don't care one way or the other... Who knows, I might even start writing articles again! Also, I agree - the entire article should be stripped to something resembling its original state. This was one of the reasons I was asking about adding some "NavHeader"-like means of hiding page elements. Ideally, we'd put all of the extraneous stuff into one big table and hide it, with a link saying "Tell me more!" that would toggle the table's visibility on and off. (Ideally, the link itself would also change to "Tell me less!" when you clicked it.) c • > • cunwapquc? 18:29, 26 November 2006 (UTC)
- Later...
- Did I say "36 hours"? I actually meant "21 minutes" — just a little typo there!
- Anyway, back to what Hrodulf wrote earlier. In my opinion, only a small minority of users are "enjoying" watching Nintendorulez complain about this. Unfortunately, all of them are long-time admins. If he stops complaining about it, they'll simply conclude that they've gotten away with it, and nothing will happen. If he continues to complain, they'll conclude that their little jibe is still having an effect, they'll keep it in place "because it's funny," and still nothing will happen. So the situation will continue no matter what Nintendorulez does. In fact, it will never be resolved until one of the aforesaid administrators (preferably one with enough clout to make it stick, or perhaps even the original offender) decides to start acting like an adult and puts the interests of the website above his/her vaguely-conceived need for payback (and don't forget, this was an incident that happened over a year ago.) As I've said many times, I'm not going to hold my breath waiting for this to happen, but who knows? One of them could surprise me. It hasn't happened before, but hey, there's a first time for everything! c • > • cunwapquc? 18:37, 26 November 2006 (UTC)
- Sorry to piss on your bonfire but I have to disagree with your comments, as I don't see how "Nin, USA" harms the interest of the website. Its not like it actually has his name or links to his userpage, as you yourself have done quite a few times. Have you heard me complaining that my userpage is referenced here or here? The only thing linking Nintendorulez to the Euroipod article in any way is this talk page that you seem determined to keep going on and on and on and so on. I have no idea what you are implying by "need for payback", but I can say that for my part its about the amusement value, not any personal vendetta. The negative reaction to this anti-article is clearly far more amusing that the actual content. This ongoing "debate" though)
- How many times have we been over this, Mhaille? The two links you've pointed out are a direct reaction to attacks you and your unidentified friend made on someone whom, incredibly, you still think I am, here, here, here, and here. You have the power to remove those attacks, not me. In fact, you have the power to remove those "references" to you too, so why don't you? Because by removing them, you'd have to admit that the practice of putting usernames in articles where they're not wanted is wrong? And I obviously don't believe for one second that you're keeping the "Nin" reference on this article for the "amusement value," unless you're referring solely to your own personal amusement, since we all know what that usually involves. I'd have to say that most of us aren't amused by this at all anymore - at least in my case, it stopped being funny after about 48 hours, and that was almost 11 months ago now. Most of us wish it would just fucking stop. So, if you want to end the "whining," then start acting responsibly and get rid of the reference. Simple as that. And hey, why not get rid of the attacks on me too, and I'll get rid of the references to you, so that we can all be buddies again? What a strange idea that must seem! But hey, suit yourself... You always have, after all. c • > • cunwapquc? 20:09, 26 November 2006 (UTC)
- Sorry, but I still don't see "attacks" on anyone. Its interesting that you highlight myself and DG (whom I barely have any dealings with so not sure I can call him a friend, unidentified or otherwise) have been your biggest supporters, including during the time when you had your little tiff earlier in the year. As for being buddies I've never had anything against you in the first place (other than the occasional whinging, but hey, who's not guilty of that from time to time). In fact, if you were to speak to me directly we'd probably get on like a house on fire (holds out a burning olive branch). But thats not the issue, is it?
- Who is "most of us", as I can only see the same number of frequent posters here and the usual suspects at that. If "most of us" had wanted it changing, surely it would have already happened. As it is, in its current form the article was voted "Best of Show" in numerous competitions, so I still can't see that many people agree with you that this stopped being funny a year piss on your bonfire but I have to disagree with your comments, as I don't see how removing or changing "Nin, USA" harms the interest of the website. I have yet to hear one argument as to why it should be left there. And if it really isn't meant to reference me, then it wouldn't mean anything to change it, now, would it? --User:Nintendorulez 20:34, 26 November 2006 (UTC)
- Look,
"Guffy"Mhaille, there are basically three ways to go here. You can keep piling on the BS, like you did just now, or you can ban me, or you can just grow up. You're not fooling me, and I doubt that you're fooling anyone else who's been paying attention. Nearly every attack I've seen coming from admins against regular users in main article space has come directly from you. If you want to make things to be all nicey-nice again, then just do it, okay? Otherwise, don't waste my time. You can call it "whinging" if you want, but my time is very valuable, and if I were to bill Uncyclopedia at my usual rate for all the hours I wasted here, totally unsuspectingly, I'd be owed several thousand dollars. And sure, I'd fix it all myself, but oops, sorry - I don't have the access rights! c • > • cunwapquc? 20:46, 26 November 2006 (UTC)
- Firstly I was not responsible for the Guffy account (and I have no idea who was), and secondly I really don't want to waste any more times with your delusions of persecution. I'm more than happy to take a joke in the manner in which it was given. I still can't see how you can judge the examples you gave as "attacks" and then refer to me a Goatse loving paedophile. You can either see the funny side or not, nobody cares.
- Nintendorulez....as people have stated before the "Nin, USA" reference is not aimed at you (there's not even a link), rather its an ironic reference to your response on this very page. There IS a big difference, IMHO. -- believe you when you start taking action, Mhaille, not just because you say I should. I mean, do you deny being User:GOD!? Do you deny adding the "Nin" reference to Euroipods? Couldn't I be forgiven for thinking that creating sock puppet accounts to go after non-admin users is your personal M.O.? And who said anything about "persecution," anyway? I'm not the one being persecuted - the person being "persecuted" is a college student who, according to him at least, has made exactly one edit on this website during its entire existence. I'm a 45-year-old professional developer who's made about 2,000 edits here, and none on Wikipedia, which is really what this whole stupid issue is "about." (And there IS a big difference, IMHO!) The four articles I linked to above are clear violations of Uncyclopedia's vanity policies, by practically any definition you can come up with. So why are they still there? And if you want the offending references to yourself removed, remove 'em. I don't care one whit. I'd even do it for you, if I thought there was any chance whatsoever that you'd let me do the same on my own (and his) behalf, and end this whole sad, stupid affair here and now. c • > • cunwapquc? 21:46, 26 November 2006 (UTC)
- I'm not taking any action. End of. GOD! isn't under my control, I have only one other account that I edit with and you won't have noticed anything that account has created (I use it to write things without the "Mhaille" baggage). So no....its not my MO at all. I'm a 37 year old professional designer/developer who has made who knows how many edits here and exactly two on Wikipedia (both of them last May), so again I don't know exactly how you think I am linked to this "whole stupid issue".
- Just for the record I was actually speaking with Nintendorulez on IRC at the time I added the Nin reference. Even amidst all the ranting over the original Euroipods article he's still a funny and intelligent guy, who I like "speaking" to. So no, I don't deny it, and I've explained why its there. If you'd care to ask him I also tried to calm the whole situation down when he was threatened with a long extended ban over the original article too.
- The point I'm making is that because you can't see the humour (no matter how childish) in something doesn't mean that it has no value. As an Admin I have to leave dozens, if not hundreds, of crappy, bitter little childish articles, with "humour" that covers many different genres (and some that you can't categorise). Why? Because there's a seed of humour in them. Because the person who submits that might go on to write something brilliant. Because they may learn from some of the great writers we have here (yourself included) and become better for it...(click tape for stirring music}...over the last year as an Admin I've seen some brilliant writers and photoshoppers evolve, and I'm grateful for any small part that I have had in helping any of them. Tread lightly, for you tread on my dreams....(sound of tape ending)
- As I've said before you can either believe me or not, me no care. I'm a busy)
- Right, so the "nin" is an ironic reference to nintendorulez' response on this page? In that case I've thought of something much funnier - changing the "nin" to "Mhaille", as an ironic reference to this discussion about whether or not there should be a name there as an ironic reference to this talk page. Seeing as nintendorulez wants the name removed, and Mhaille wants the name kept, the fact that it'd be name would increase the irony by a lot (seeing as there isn't much in the reference in the first place). Think of it as meta-irony, or as I like to call it, EXTREME irony. Os as an in-joke within an in-joke. I'll go ahead and change it shortly, I don't see any downsides to that change. Correct me if I'm wrong. • Spang • ☃ • talk • 04:50, 28 Nov 2006
- Brilliant idea, Spang. Let's see how Mhaille likes it. And a reference to "my response" is still a reference to me, either way. Don't try and reword it to mask it from what it is. You aren't Fox News. --User:Nintendorulez 22:46, 29 November 2006 (UTC)
- In response to you earlier statement: If you would like to experiment with NavHeaders or whatever the hell they are, I would greatly encourage it, perhaps on a seperate page in you userspace. If you could get it to where it looked pretty good you could post something at the forums and perhaps even get the article changed. I think it'd look alot better if you didn't have to see all of that other crap constantly - especially that big yellow box, talk about:40, 26 November 2006 (UTC)
- <damn edit conflicts!> Ooh, 21 minutes! Incredible! Yeah, and I implemented the NavFrame thing earlier today already, you're so behind the times! Though I think the link can only say [show] or [hide]. So it'd have to be "tell me more! [show]". So... get to work on that, I've no idea how that NavHeader thing-a-ma-bob works. <goes to take name out again> I've never been in a revert war before, I hope I'm doing it right! • Spang • ☃ • talk • 18:45, 26 Nov 2006
- You're doing just fine... I try to avoid revert wars myself, but it seems like avoiding them doesn't do much good. They'll still call you a "whiner" no matter what, apparently. But anyway, thanks! I haven't been spending much time around here lately, so I hadn't noticed that you'd done that NavHeader thingy. I can't say when I'll get around to it though, because I have a couple of recertifications I've gotta do ASAP... but there's no reason why that couldn't be my next little project. c • > • cunwapquc? 18:56, 26 November 2006 (UTC)</s>
edit This has turned out to be a lot more vicious than I expected
Guys, I added my name to the article to show that it isn't really that terrible a thing. There are plenty of real things to be upset about, this is the most frivolous battle over nothing I've ever seen in my entire life, except for that time on IRC where people had a huge fight over someone using the letter "Q" where it didn't belong. And that only wins because of the technicality that it involved only one letter instead of one word.
Euroipods would probably be a long forgotten article if not for this ongoing vendetta over nothing. I tried to take Nin's name out before since he was upset about it and I think we have to be pragmatic, right or wrong, it's staying. I'm personally indifferent and as shown by my actions lean slightly towards removing it. But I'm not the king of Uncyclopedia, which I think you'll all agree is a very good thing.
Anyway, now that I'm in there also maybe some of the attention will be off of Nin and on me which will make this less of a humiliating experience for him. I've tried to help on this issue before, since the name isn't coming out I decided this was really the only thing I could do about it. Let's resolve this issue and move on.
And if you're still upset, Nin, make a spoof Microsoft Zune article (Eurozunes?) and put someone else's name all over it. You do have choices you perhaps havn't considered, you know. --Hrodulf 22:57, 26 November 2006 (UTC)
- Eurozunes!, 26 November 2006 (UTC)
edit A is NOT "money"?!
Here is the absolute original revision:
- "A website giving away free ipods in return for
- a) completing offers
- b) reffering freinds to do the same"
Forsooth! In reality, the "money" point was added in THIS revision, some three months later!
I feel very compelled to revert this hideous act of anti-vandalism and restore it to its original, pristine state. --L 16:25, 28 November 2006 (UTC)
edit Testimonials . . . . . dot
I am extremely upset that my name has been removed form the eyroipds arttcle. Seriously, undo that right now. I WANT MY NAME ON. Expecially a link to my userpage. --Hrodulf 21:50, 28 November 2006 (UTC)
- Wow, that was a quick response. I guess my euroipods crusade was a lot shorter than Nin's. Oh well, back to writing too many UnNews articles and such. --Hrodulf 22:36, 28 November 2006 (UTC)
- Just for the record, I would have reverted it without you - or anyone else for that matter - asking. However, I'm glad I could help a guy:44, 28 November 2006 (UTC)
- Children of Hrodulf isn't much of a crusade, Some user, since it appears you shot yourself in the foot big time by writing that. If you want to attack me for being an insensitive asshole to Nin's plight (hey, I'm a New Yorker, of course I'm an insensitive asshole, it's my nature) maybe writing an article criticizing me for it whose content is, um, lets see, approximately 2000 times infinity more offensive than anything in euroipods is somewhat of a self-defeating hypocritical posture for a debate of this tiny magnitude.
- Of course, I'm not offended by it because I couldn't imagine anything smaller, pettier or more ridiculous than starting a flame war over the opinion of me held by someone who would respond in this manner to my decision to parody the holy war over euroipods as I have. I find the whole incident funny more than anything else, and this only increases the humor content for me.
- Hold on, now. Do you or do you not want a crusade here? You can't just say the crusade is over before it even starts. Crusades cost money, take a lot of time to arrange, people have to take time out from their busy schedules, warriors have to get paid, caterers have to order food in advance, and so on. Even if you don't like the crusade you've started, you have to go through with it, no matter what! I mean, at least get on the boat, wave the flag around, maybe scale the walls of Jerusalem a couple times... Then, sure, you can go home, get a sandwich, take a shower, maybe marry the Queen. But not before! I mean, there haven't even been any casualties yet or anything. c • > • cunwapquc? 02:26, 30 November 2006 (UTC)
- It's not my fault you self-destructed. You can say whatever you want, but I don't believe in wrestling in the mud with a pig, which is what you are. You both get dirty, and the pig (you) likes it. No thanks. Find someone else to play with. Perhaps you can try looking on Encyclopedia Dramatica to find another little drama queen to have your cat fight with. --Hrodulf 02:32, 30 November 2006 (UTC)
- Hrodulf, I wouldn't worry about it too much, you haven't really "made it" until someone writes a scathing commentary on your opinions, shortfalls as a human being or just plain points out that you are a "total fag". Its a bit like hazing, but with marginly less spanking. Welcome to the)
- Considering I linked Children of Hrodulf from my userpage, saved its content on a subpage to my userpage, and addressed it here, in the article talk page and in the Benson forum (where I also flamed myself for good measure, see Forum:A FORUM IS NOT ENOUGH BENSON DESERVES A NAMESPACE, I don't think my worrying about it is an issue. Hey, I got GotM nominated my first month here if I recall correctly. If I was sensitive to this sort of thing, I'd have been long gone. --Hrodulf 10:51, 29 November 2006 (UTC)
- Mhaille debags Hrodulf and spanks him for Ewoks singing a bit of the old Ludwig Van, my brothers....I was cured hate you, Hrodulf, no offense. But now that you've been around for awhile and gotten the hang of things, you're a pretty cool:19, 29 November 2006 (UTC)
- Practically on my first day here I got into an argument with Isra over NRV, and I did get somewhat personally invested, although I don't think it reached anything I'd describe as hatred, more like frustration, perhaps. I now have a huge amount of respect for Isra and his opinion carries a great deal of weight with me.
- I do seem to recall some tension between yourself and I, although to be honest, the cause of it eludes me at the moment, I'm assuming it was some sort of editing thing, I honestly just don't remember. What I tried to do was to use tension for a positive purpose, instead of arguing over deletions, I tried to develop information I learned, like the userpage subpage method of article creation, and communicate it to other users, to find a way to defuse tension and make the article creation process work better for everyone.
- I've tried to be fair and objective during my time here and to also not let myself be controlled or intimidated by anyone. It's been a fine line, but I've tried my best to walk it. I think it's probably inevitable for tension to develop in a group project like uncyclopedia, but the trick is to find a creative way to resolve the tension, rather than letting it destroy everything you're trying to accomplish.
- Yeah, I guess it helps that
I take alot of drugsI've learned to live with people having different opinions then myself. Also, I believe the cause had something to do with... something. I don't know, it was in the ban message, but I'm too lazy to go:16, 30 November 2006 (UTC)
- It was because you huffed a stub back in April that I was trying to expand while I was working on it instead of tagging it, which led directly to the forum discussion that led to the development of the how to get started editing article, so all in all it was a productive bit of interuser tension, I'd venture to say. No big deal, really. Just ordinary stuff that goes on here every day. --Hrodulf 01:55, 30 November 2006 (UTC)
- Sorry, but I'm not here to entertain you or your sock puppets. The comedy is finished. You'll have to go to encyclopedia dramatica if you want more, go ahead, you'll fit in better there. Don't come back either. --Hrodulf 06:28, 30 November 2006 (UTC)
- My sockpuppet has had nothing to do with this conversation! And I seriously can't tell if you're joking or not. In all fairness, you did ask for a crusade, and you have seen what one of those involves. When someone gets worked up over something they asked for, and does so in a manner entirely fitting of the infamous talk page they do it on, you have to wonder if it's not altogether serious... • Spang • ☃ • talk • 06:40, 30 Nov 2006
- I really should use my powers of predition to make shed loads of money or something.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 14:29, 30 November 2006 (UTC)
- Y'know, User:Elvis, I absolutely agree. Your ability to predict the tendency of users like me to, as you've so eloquently put it, "winhge" when someone with admin powers who really should know better prolongs a useless, unending, disgraceful flame-war in flagrant defiance of common sense and human decency really is utterly amazing! Why, it's vastly more impressive than my own all-too-meager ability to predict the likelihood that someone who hasn't written an actual article for the website in an entire year, if ever, would one day blatantly spork Wikipedia four times in succession and change the original versions merely by swapping a few names and putting the letters "Un" in front of a few words, all in the apparent pursuit of his own self-glorification by comparing his position as one of several dozen obscure, anonymous website administrators to that of a member of the British Parliament! I mean, it's like you're some sort of super-duper prophet or something, only with below-average spelling ability and no appreciable talent for written humor! Bravo, Elvis! Well done! In fact, you get a cookie! Enjoy! c • > • cunwapquc? 03:37, 1 December 2006 (UTC)
- Oh no you found my dirty little secret, you know it helps if your bitting come backs don't consist of things that the intended victim not only freely admits but takes a perverse pride in, the fact that you actualy seem to think that I take any of this stuff seriously actualy says a lot more about your ego than mine, but I guess that must be the difference betwen "having an appreciable talent for written humour" and actual having a sense of humour.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 13:47, 1 December 2006 (UTC)
- Sorry, I see no need to explain myself in the slightest. --Hrodulf 08:48, 30 November 2006 (UTC)
edit Happy one year anniversary Euroipods!
And thanks for bringing us a years worth of pointless debate that gets nowhere! Crazyswordsman...With SAVINGS!!!! (T/C) 07:29, 30 November 2006 (UTC)
- And no reskin? What a down:10, 30 November 2006 (UTC)
- I'm sad about the lack of a reskin. The new table thing rocks and makes the article look great. Happy Euroipods Day!!! --Uncyclon - Do we still link to BENSON? 00:43, 1 December 2006 (UTC)
edit This is the 9th longest page on all of Uncyclopedia
Special:Longpages doesn't show talk pages, but if it did this page would be #9. Longer than You have two cows even. - Nonymous 02:51, 14 December 2006 (UTC)
= I don't really have an opinion abour EuroIpods, I just wanted to say 'hi' to everyone who knows me. 86.6.207.111 22:48, 8 February 2007 (UTC)
edit If you liked this article...
...then check out Contents! -- 16:29, 10 February 2007 (UTC)
- You're not article whoring, are you? Because that page is aleady in your sig. =P --AAA! (AAAA) 08:27, 18 February 2007 (UTC)
edit New Image?
I know this is probably blasphemy or something but maybe could we change the image of the euroipod to this: --Uncyclon - Do we still link to BENSON? 09:18, 18 February 2007 (UTC)
- I really like that. lets see if we can do something with it.--
» >ZEROTROUSERS!!! EAT ME!!!! CRAZY PERSON! SMELLY!!! ILLOGIC, BEHOLD!!!!~» 10:27, 22 February 2007 (UTC)
edit Sweet Jesus! Euroipods is no more!
The website now redirects to "Eurogiveaway" which is crappier that Euroipods but has more stuff. This is the work of Oprah!
edit History
Can someone please explain to me why this page is so often referenced and why most of it is hidden at first? -Unknownwarrior33
edit Dead-horse-beating time!
All right, I'll try to keep my points as simple as I can:
- This article is hilarious (only with the "hidden" bits, however).
- In my opinion, it inherently deserved to be featured.
- Howeber, unless Nintendorulez has totally misrepresented the vote, it appears that this article's becoming featured was a dictatorial violation of absurd proportions.
- Therefore, this article should not, at the moment, be featured.
- However, there is a way out of this, which I humbly propose: an extended (meaning at least, I don't know, thirty votes) , and well-publicized (so it attracts a broad spectrum of both lovers and haters) "recall vote". Those of us who feel that the article is brilliant should have nothing to fear — humor should be appreciable by the masses. Otherwise, there is absolutely nothing to philosophically separate the sysops from the countless users who get banned for creating nothing but crap, then complain about fascism ("Oh, you people just don't understand funny"). Conversely, of course, neither should those who disagree with the status worry about such a vote.
- In any case, I am confident that it would at least garner a "Quasi" from such a vote, but its continued fame and infamy may well be enough to propel it all the way to "re-featured", especially if enough of the original haters change their minds. Of course, the real problem is that a lot, maybe even most, of the people pissed at Rcmurphy will be unable to put a "for" vote without thinking that they are supporting the original vote result as well, which I don't believe they would. So when voting, please don't let it get personal in any way.
- No, of course I don't think the article needs to appear on the main page again. But, contrary to such claims as "less than a day of featurization" and " It has been featured and never will be again", "featured" doesn't just last for a day. Featured is forever. (Someone should tell DeBeers). A re-examination of this whole riduculous (sometimes in funny and sometimes in ugly ways) thing is quite apparently needed for the sake of closure, plus our impression to the world beyond Uncyc, and general good will among the community.
Thoughts?— Lenoxus 08:39, 17 March 2007 (UTC)
- Bad idea...? Maybe I don't understand what you're suggesting, but it sounds like you want some sort of formal referendum on whether the article, in its current state, is really feature-worthy. I doubt such an exercise would be worth the fuss, but assuming this were to actually take place, would we be able to discount the votes coming from people who have been personally involved over the last 18 months? Those votes and opinions shouldn't really be considered objective, including my own, assuming I were to ever vote (or opine) on anything around here again. Either way, something like that wouldn't produce "closure" - it would probably just produce more arguing, and as the Wikipedians would say, consensus can always change. But more importantly, while mysteries sometimes call for "closure" in the form of a significant event (i.e., revealment of the solution), fiascoes mostly just require people to forget they ever happened. c • > • cunwapquc? 05:31, 18 March 2007 (UTC)
- I definitely see what you're getting at. I suppose, to put it another way, I'm vaguely paranoid about the whole VFH process, and how it appears that there was and continues to be absolutely no way of stopping the sort of thing Rcmurphy did -- which is not so much a personal objection against Rc as a general one against the system. I can't help but wonder, for what next article will a sysop think "Wow, it would be amusing to unilaterally feature this"? Also, why was Rc (as far as I can tell from this page and Rc's talkpage archives) not chastised by any other admins for this action? These are the sort of questions I'm still trying to sort through. If someone showed the policy or guideline that will prevent this from happening again, I would rest more easily, and would be wiling to allow this to be "that one fiasco." Oh, and as far as self-interested votes go -- um, I wouldn't call them irrelevent by any shot, given that you can even nominate your own article. But the more hypothetical votes from people like me (who never contributed), the better. -- Lenoxus 17:28, 18 March 2007 (UTC)
- Sure, I wasn't contributing to the site when this was all going on, but about the "next article a sysop will unilaterally feature". I really can't see this happening again in my opinion. Just because of the fact that it would trigger something as big as this off again. So I'm 99% sure that that won't happen again. As for the re vote, I doubt that it would do any good. I can't really say why, I just don't think it will solve anything. Kudos for trying to find a way to solve it, but I think the whole issue is in the past. —Braydie 17:51, 18 March 2007 (UTC)
- It also would never happen again, because it's already been done, and it wouldn't be nearly as funny the second time:22, 18 March 2007 (UTC)
- Hmm, that's the kind of humor I'm just not groking — why it was even funny to feature it like that the first time around. I would only get that if the topic itself had to do with unilateral featuring in some appropriate way, like "Freedom of Expression." Or if instead of being somewhere between decent and brilliant, the article was so monumentally bad that the audacity would be that much greater, like Fisher Price. But I'll agree to disagree on that. — Lenoxus 23:27, 18 March 2007 (UTC)
edit Right now, I wish this weren't protected
For the sole reason that I want to narrate this article. --Crazyswordsman...With SAVINGS!!!! (T/C) 04:48, 10 April 2007 (UTC)
- There's always the half-finished monstrosity here, if you're looking for narration. You could like, finish it, and stuff.--<<
>> 01:21, 2 June 2007 (UTC)
edit and?
mh. i must be dumb as hell, but i _just_ _dont_ _get_ this article. like whats its point? only the discussion? --195.250.188.109 05:37, 11 April 2007 (UTC) - Yes, only the discussion. Stop flaming or die Madretsma 22:04, 18 April 2007 (UTC)
edit Sucker Ninjastar
edit Help?
Made this site on Wikipedia...now I need someone to nominate it for Featured article. I'm not too familiar with wikis. Anyone?
edit Euroipods SuckRocks!
Euroipods both sucks and rocks. Marshal Uncyclopedian! Talk to me! 00:11, 30 April 2007 (UTC)
edit Finally, definitive proof EuroIpods is funnie
Free iPods, being given away in exchange for money....This article instantly negates itself, placing every subsequent statement in the realm of absurdity. The absurd will always be funny when surrounded by the proper context, and worded in the proper tone. Both of those conditions are satisfied by Uncyclopedia and Uncyclopediaists. Therefore, this article is funny. So, nyah. :P
- Of course, the same could be said for This page does not exist, if it were created; yet everyone generally agrees that a created "this page does not exist" would not be funny. --YeOldeLuke 00:59, 14 August 2007 (UTC)
edit SPAM!!
Yeah, I'm doing this so that my name appears in Talk:Euroipods. --Capercorn FLAME! what? UNATO OWS 01:52, 27 August 2007 (UTC)
edit Yay...
...I finally get the joke! It's this HUGE talkpage that is full of pointless arguments!
^.^ Okie, I feel happy now that I have finally got the joke. Well.. I kinda got the original joke... But... nah... didn't entertain me that much. Ah well. sleepygamer 00:31, 11 September 2007 (UTC)
This is acctualy just a set up to to distract you while That Guy steals your ginger kittens
edit Puppies are cute
Discuss. --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 22:01, 1 October 2007 (UTC)
You know it's tough to look cuter than that.
- Oh, I love puppies soooo much! With their little ears and their sweet kisses! And their waggy tails are soooo cute!!! --:05, 1 October 2007 (UTC)
- Ah, puppies are cute yes, as they have their distinctive tiny tails, their typically cute noses and their cutesy little ears, but I think that kittens can be considered cuter by the following formula:
Where C is cuteness, S is the size of the kitten/puppy, T is length of tail and F is the cost of food later in life. For example a kitten that is 5 inches long, with a 2 inch long tail, that is fed on cat food costing £1 a tin, the end result is a cuteness factor of 9. But Take a puppy that is 5 inches long, with a 1.2 inch long tail, that is fed on dog food costing £1.50 a tin, the end result is a cuteness factor of 3.75. Or maybe I'm just spouting bollocks because I like ickle wickle kittens wiv dere cuuute likkle paws and dere cuuute likkle walks and dere ikkle bloo eyes! My 2 cents. *ahem* sleepygamer 12:08, 21 October 2007 (UTC)
- You both have good arguments, but you'd have to agree that my goats, with tails of about 2.5 inches, and bodies of about 10-11 inches, and eat tin cans that cost about 10¢, pretty much trump both kittens and puppies in the mathematical sense. --YeOldeLuke 23:43, 30 October 2007 (UTC)
- I will admit that using my maths, goats are cuter, but I forgot to put in the "popularity" variable.
Where P is popularity. Popularity being the number of pictures using the animal in question used for comedic purposes. Also youtube videos of the animal in question doing humourous or adorable things. I think, if we try a search for pictures via google. An image search for "Pussy" returns about 5,280,000 results. "Puppy" returns 5,060,000, and "Goat" returns 2,860,000 results. I think that makes the kitten I used earlier have a cuteness factor of 5,280,009. The puppy have a cuteness factor of 5,060,003.5, and the goat have a cuteness factor of 2,860,024.99. Thank you. </completebollocks> sleepygamer 00:32, 1 November 2007 (UTC)
- I concur. With the new improved formula, pussies are indeed better than goats. --YeOldeLuke 08:04, 3 November 2007 (UTC)
- I mean, who could resist a nice, tiny, furry pussy? A communist, that's who. sleepygamer 15:31, 3 November 2007 (UTC)
edit This isn't fair
There should be this much talk on MY talk page. I was going to make a joke, but... I'll leave that to the professionals. Definitely not <insert name here> coz they think they're better than Chuck Norris. Just. Not. Possible. --Xaerun 08:37, 30 October 2007 (UTC)
edit I First Found it Stupid, but I'm Slowly Growing a Liking to it...
"At euroipods.com, our goal is to provide you with quality free iPods at a reasonable price." --Narf, the Wonder Puppy/I support Global Warming and I'm 100% proud of it! 04:52, 21 November 2007 (UTC)
edit what could be the correct
Tips: Hollywood or Sparta maybe Crazy?
--Drhlajos 12:50, 24 December 2007 (UTC)
- The correct answer, no doubt, is CNN. --YeOldeLuke 10:50, 27 December 2007 (UTC)
edit Can you believe it?
Euroipods has been kickin' it old school and keeping it real for over two years! --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 19:29, 14 January 2008 (UTC)
- EVERYBODY WANG CHUNG TONIGHT Ж Kalir yes, I play Pokémon 02:45, 27 January 2008 (UTC)
edit You're missing part of the picture.
Viewtest.gif and Signup.gif are missing from the picture at the top of the talk page. 71.220.211.235 18:32, 7 February 2008 (UTC)
- Yup, they were accidentally deleted. You could try rebuilding the missing parts from the (now different) euroipods homepage if you feel like it. Or maybe try and find the same image in some archive somewhere. • Spang • ☃ • talk • 21:03, 07 Feb 2008
- Why did everyone get so pissed off over this. It was obviously a joke, not like a really piss poor article was featured... wait it was... now I'm confused... I want a muff:02, 1 March 2008 (UTC)
edit Hmm...
Hmmm....I get what it's meant to be (I've read the in-jokes thing) but I still don't "get it", it's confusing, and not very funny and random and a bit weird and therefore that is why I LOVE EUROIPODS! Yup. - 04:31 2 AprilSir FSt. (QotF BFF NotM) YTTETalk!Read!Sign!Whore!CMC!Pee!
- Hmm.... I DON'T get what it's meant to be but I FUCKING GET IT. EUROiPODS R EPICZORZ(NSFW LINK)!!!Learn all about the actual ego of DJ Jasper here.Contact Me. 22:39, 9 June 2008 (UTC)
EURO PODS!!! NOW WITHOUT THE ORAL HERPIES
edit Special gift
Does the hot blonde in the advert come with the euroipod? If not, how much do I have to pay extra?
User:HighFructose 18:15, 22 December 2008
edit BLATANT REACTIONARY RESPONSE
Seriously though, I was a big fan of the old page. All that extra crap just seems kind of unnecessary. Except for "free iPods at a reasonable price," that was pretty good.
edit RCMURPHY IS A HUGE TROLL. PLEASE RAPE HIS FACE.
NIGGERS NIGGERS NIGGERS SUPERDICKS 00:41, 21 April 2009 (UTC)
edit Euroipods Wiki
Did anyone notice that Spang took it upon himself to create a Euroipods Wiki? --Meta 500 01:40, 21 April 2009 (UTC)
edit when will Ëüroüpod and Ëüroüpad and other Ëüroüproducts exist?
when will Ëüroüpod and Ëüroüpad and other Ëüroüproducts exist?
- When you sign up as a member of Uncyclopedia and write them well enough so they don't get taken away. Aleister in Chains 21:18 12 4 mmx
edit euroipods.com is mine!
I bought Euroipods.com in exchange for money! --ComradeSlice, 19:22, January 1, 2011 (UTC)
edit Which are better, EuroIpods or AmeroIpods?
edit LOLLLAHLFHOAWOFAJWEPFIAOEWJFPAJEIF:D
I HAVE NO IDEA WHAT I AM DOING--ArcticWolfy (talk) 02:31, August 9, 2012 (UTC)
edit Obligatory spampost is obligatory
As one of the few members of UncycloWikia who actually gives a shit (despite being an aspie (see the aspie page for more info)), I felt it was necasary to join in on this ancient tradition, of discussing Euroipods. Now, I wish all you fine people a good day. May many a flamewar come of this comment. --The Slayer of Zaramoth DungeonSiegeAddict510 17:02, June 3, 2013 (UTC)
edit Just popping in
Posting this for old times' sake in hopes Nintendorulez still lurks around. I think they should feature)}" > 13:35, November 5, 2014 (UTC) | http://uncyclopedia.wikia.com/wiki/Talk:Euroipods | CC-MAIN-2015-18 | refinedweb | 19,255 | 72.36 |
C# coding styles
This is the to be considered the coding standard for Dolittle and is subject to automated verification during automated builds and also part of code-reviews such as those done for pull requests. Some things are common between languages, such as naming.
Values, principles and patterns & practices
It is assumed that all code written is adhering to our core values, core principles and development principles. On top of this we apply patterns that also reflect a lot of the mindset of things we do. Read more here.
Compactness
In general, code should be compact in the sense that any “noise” of language artifacts or similar that aren’t really needed SHALL NOT be used. This to increase readability, not decrease it. Things that are implicit, SHALL be left implicit and not turned into explicits.
Keywords
Use of
var
Types are implicitly provided by the compiler and considered noise during declaration.
If one feel the need for explicitly declaring variables with their type, it is often a
symptom of something else being wrong - such as large methods that you can’t get a feel
for straight away. This is most likely breaking the Single Responsibility Principle.
You MUST use
var and let the compiler infer the type implicitly.
Private members
In C# the private modifier is not needed as this is the default modifier if nothing is specified. Private members SHALL NOT have a private modifier.
Example:
public class SomeClass { string _someString; }
this
Explicit use of this SHALL NOT be used. With the convention for prefixing private members, the differentiation is clear.
Prefixes and postfixes
A very common thing in naming is to include pre/post fixes that describes the technical implementation
or even the pattern that is being used in the implementation. This does not serve as useful information.
Examples of this is
Manager,
Helper,
Repository,
Controller and more (e.g.
EmployeeRepository).
You SHOULD NOT pre or postfix, but rather come up with a name that describes what it is.
Take
EmployeeRepositorysample, the postfix
Repository is not useful for the consumer;
a better name would be
Employees.
Member variables
Member variables MUST be prefixed with an underscore.
Example:
public class SomeClass { string _someInstanceMember; static string _someStaticMember; }
One type per file
All files MUST contain only one type.
Class naming
Naming of classes SHALL be unambiguous and by name tell exactly what it is providing. Example:
// Coordinates uncommitted event streams public class UncommittedEventStreamCoordinator {}
Interface naming
Its been a common naming strategy to include
Iin front of any
interface.
Prefixing with
Ican have other meaning as well, such as the actual word “I”.
This can give better naming to interfaces and better meaning to names.
Examples:
// Implemented by types that can provide configuration public interface ICanConfigure {} // Implemented by a type that can provide a container instance public interface ICanCreateContainer
You SHOULD try look for this way of naming, as it provides a whole new level of expressing intent in the code.
Private methods
Private methods MUST be placed at the end of a class.
Example:
public class SomeClass { public void PublicMethod() { PrivateMethod(); } void PrivateMethod() { } }
Exceptions
flow
Exceptions are to be considered exceptional state. They MUST NOT be used to control program flow. Exceptional state is typically caused by infrastructure problems or other problems causing normal flow to be able to continue.
types
You MUST create explicit exception types and NOT use built in ones. The exception type can implement one of the standard ones.
Example:
public class SomethingIsNull : ArgumentException { public SomethingIsNull() : base("Something was null") {} }
Throwing
If there is a reason to throw an exception, your validation code and actual throwing MUST be in a separate private method.
Example:
public class SomeClass { public void PublicMethod(string something) { ThrowIfSomethingIsNull(something); } void ThrowIfSomethingIsNull(string something) { if( something == null ) throw new SomethingIsNull(); } }
Async / Await
In C# the async / await keywords should be used with utmost care. It is a thing that
without really thinking it through can bleed throughout your codebase without necessarily
a good reason. Alongside async / await comes the
Task type that needs to be there.
The places where threading is necessary, it MUST be dealt with internally to the
implementation and not bleed throughout its APIs. Dolittle has a very good handle on its
entrypoints and from these entrypoints, the need for scaling out across multiple threads
are rarely needed. With the underlying infrastructure being relied on, web requests are
already threaded. Since we enter the system and returns back as soon possible, we have a
good grip of when this is needed. Threads can easily get out of hand and actually slow
down systems.
Exposing IList / ICollection
Public APIs SHALL NOT have mutable types as return types, such as IList, ICollection. The responsibility for maintaining state should be with the owner of it. By exposing the ability for changing state outside the owner, you lose control over who can change state and side-effects occur that aren’t clear. Instead you should always expose immutable types like IEnumerable instead.
Mutability
One of the biggest cause of side-effects in a system is the ability to mutate state and possibly state one does not necessarily own. The example is something creates an instance of an object and exposes public getters and setters for its properties and inviting anyone to change this state. This makes it hard to track which part of the system actually changed the state. Be very conscious about ownership of instances. Avoid mutability. Most of the time it is not needed. Instead, create new objects with the mutation in place. | https://dolittle.io/docs/contributing/guidelines/csharp_coding_styles/ | CC-MAIN-2021-10 | refinedweb | 924 | 54.02 |
README
react-three-Scissor
Multiple scenes, one canvas! WebGL Scissoring implementation for React Three Fiber.
View Demo · Report Bug · Usage
Table of Contents
This demo is real, you can click it! They contains the full code, too. 📦
Why this?Why this?
Havigng multiple WebGl contests within one webpage is generally a bad idea because (from ThreeJS manual):
- The browser limits how many WebGL contexts you can have. Typically that limit is around 8 of them. As soon as you create the 9th context the oldest one will be lost.
- WebGL resources can not be shared across contexts. That means if you want to load a 10 meg model into 2 canvases and that model uses 20 meg of textures your 10 meg model will have to be loaded twice and your textures will also be loaded twice. Nothing can be shared across contexts. This also means things have to be initialized twice, shaders compiled twice, etc. It gets worse as there are more canvases.
To solve this, we create the issusion of these being multiple canvases by having one large one and drawing on very speciifc parts of it. This process is calld Scissoring.
The ThreeJS manual gives us a very complete guide ofhow to do this in ThreeJS but I have finall come around to using React Three Fiber and this library helps to set up Scissoring with relative ease.
UsageUsage
import { ScissorCanvas, // <- R3F Canvas wrapper ScissorWindow, // <- The <div> to use as a "virtual canvas" ScissorScene, // <- The <scene> to be rendered witin a given virtual canvas useScissorFrame, // <- Like useFrame, provides access to the Scissoring render loop useScissorInit, // <- Window into the first run of useScissorFrame. Used to initialize whatever you want } from "react-three-scissor"; function Scene() { // Since each scene has its own camera we need to set up // things like Orbit Controls impatively const orbit = useRef<OrbitControls>(); useScissorInit( ({ camera, element, scene }) => { orbit.current = new OrbitControls(camera, element); }, ["window-1", "window-2"] ); useScissorFrame( (state) => { if (orbit.current) { orbit.current.update(); } }, ["window-1", "window-2"] ); return ( <> {/* Scene will be rendered in window with matching ID */} <ScissorScene uuid={"window-1"}> <mesh>...</mesh> </ScissorScene> <ScissorScene uuid={"window-2"}> <mesh>...</mesh> </ScissorScene> </> ); } function App() { return ( <> <ScissorCanvas //Pass any <Canvas> props gl={{ antialias: true, }} shadows > <Scene /> </ScissorCanvas> {/* Virtual Canvases with unique IDs */} <ScissorWindow uuid={`window-1`} /> <ScissorWindow uuid={`window-2`} /> </> ); } | https://www.skypack.dev/view/react-three-scisor | CC-MAIN-2022-33 | refinedweb | 385 | 54.12 |
With the ever-increasing use of mobile devices over the last few years, it has become more and more important for web developers to anticipate the need for users on these devices. The first step was the ability to cater for different screen sizes, thus creating the need for responsive user interface design. Over time the demands of the users increase, and it is now becoming even more important to provide a high-quality user experience, independent of the network connectivity. Users have become accustomed to using native installable applications when they are offline. They are increasingly expecting the same from browser-based web applications.
This expectation is met by Progressive Web Applications (or PWAs). A PWA is a normal web application that leverages a number of modern browser technologies to improve the overall experience. The core component of a PWA is a service worker. The service worker is a piece of JavaScript code that runs in a separate thread from the main JavaScript application and intercepts any browser requests for resources from the server. tutorial, I will show you how to develop a small PWA using the Vue framework. Vue is a framework that has been around for some time. It has recently gained in popularity as developers have come to realize that Vue strikes a good balance between a low-level hackability and high-level over-design. The application will allow the user to browse through a catalog of books. It will be making use of the OpenLibrary API to provide the data.
Create Your Vue Application
To start you will need to install the Vue command line tool. I will assume that you have some knowledge of JavaScript and the Node Package Manager (npm). I will also assume you have
npm installed on your system. Open a shell and type the command:
npm install -g @vue/cli@3.7.0
This installs the global
vue command. Depending on your system, you might have to run this command using
sudo. Once the Vue command line tool has been installed you can create your first Vue application. Navigate into a directory of your choice and run the command
vue create vue-books-pwa
You will be prompted for a number of choices. In the first question, select Manually select features. This is important because you want to include the PWA features that Vue can install into a new application.
On the following prompt, you are presented with a number of choices. Make sure you select the Progressive Web App (PWA) Support and Router choices. You will be implementing the client using TypeScript, so you will also need to select the TypeScript option. Keep the Babel option selected. You may also want to deselect the Linter choice for this tutorial. In larger applications, I would suggest keeping the linter switched on to ensure a consistent code style across your application. Altogether the choices should look as follows.
? Check the features needed for your project: ◉ Babel ◉ TypeScript ❯◉ Progressive Web App (PWA) Support ◉ Router ◯ Vuex ◯ CSS Pre-processors ◯ Linter / Formatter ◯ Unit Testing ◯ E2E Testing
Once you have made your choices, press Enter to continue. When the wizard asks you Use history mode for router? you must answer no. For all other questions, simply accept the default options.
The
vue create command will create a directory and fill it with a skeleton application. This application consists of an
App base component and two routed components
Home and
About. All components are stored in
.vue files.
A
.vue file can contain three sections identified by XML tags:
<template>,
<style>, and
<script>.
<template>- contains the HTML template that is used to render the component
<style>- contains any CSS that will be applied specifically to that component
<script lang="ts">- contains the component's logic implemented in TypeScript code
Before you start, implementing the components for the Book application, you will need to install some additional libraries that will be using throughout this tutorial. Navigate into the newly created
VueBooksPWA directory and issue the following command.
cd vue-books-pwa npm i vue-material@1.0.0-beta-10.2 axios@0.18.0 vue-axios@2.1.4
This will install the Material Design packages for Vue as well as the axios package that you will be using to create HTTP requests to the OpenLibrary API. Because you are using TypeScript, you will also need to install the type definitions for the Vue Material library. These have to be pulled from their GitHub repository. Run the command:
npm i git+
To make use of the Material Design CSS styles and icons, open
/public/index.html and add the following line to the
<head> section.
<link href="" rel="stylesheet">
The
public/index.html file contains the application’s base HTML container into which Vue will render its output. The contents of the
/public directory are served as static assets. The directory also contains
favicon.ico which you might want to change for production.
The remainder of the application is contained in the
/src directory. This is where all the code of your Vue components, their templates, and styles should be stored. In this directory,
src/main.ts serves as the main entry point to the Vue application. Open this file and paste the following content into it after the import statements, keeping any default contents.
import axios from 'axios' import VueAxios from 'vue-axios' import VueMaterial from 'vue-material' import 'vue-material/dist/vue-material.min.css' import 'vue-material/dist/theme/default-dark.css' Vue.use(VueMaterial); Vue.use(VueAxios, axios);
The main component of the application is defined in
src/App.vue. This file acts as the container for the routed components. Replace the contents of the file with the content below.
<template> <div id="app"> <md-toolbar <span class="branding"> <md-button><router-link{{title}}</router-link></md-button> <md-button><router-link<md-icon>home</md-icon></router-link></md-button> </span> <md-menu <md-button md-menu-trigger><md-icon>menu</md-icon></md-button> <md-menu-content> <md-menu-item><router-linkHome</router-link></md-menu-item> <md-menu-item><router-linkSearch</router-link></md-menu-item> </md-menu-content> </md-menu> </md-toolbar> <router-view/> </div> </template> <script> export default { data: () => ({ title: "Vue Books" }) } </script> <style> #app { font-family: 'Ubuntu', sans-serif; } .branding { flex: 1; text-align: left; } h1 { text-align: center; } </style>
The
<md-topbar> element in the template defines the application’s top bar. It contains a menu with some links to the different sub-components. The splash screen is contained in
src/views/Home.vue. Open it, and add a header and a sub-header.
<template> <div class="home"> <h1>Vue Books PWA</h1> <h2>A simple progressive web application</h2> </div> </template>
The default application created by
vue-cli contains the
About.vue component. You will not be using this component. Instead, the central component that provides the main functionality will be a component in which the user can search for books and view the search results in a table. Rename
src/views/About.vue to
src/views/Search.vue. Replace the contents with the following.
<template> <div class="search"> <form v-on:submit. <div class="input-group"> <md-field <label>Search</label> <md-input</md-input> </md-field> <div class="input-group-button"><md-button<md-icon>search</md-icon></md-button></div> </div> </form> <h2>Search Results</h2> <md-table> <md-table-row> <md-table-head>Title</md-table-head> <md-table-head>Author</md-table-head> <md-table-head>Pub. Year</md-table-head> <md-table-head>View</md-table-head> </md-table-row> <md-table-row <md-table-cell>{{book.title}}</md-table-cell> <md-table-cell>{{book.author_name && book.author_name.join(', ')}}</md-table-cell> <md-table-cell md-numeric>{{book.first_publish_year}}</md-table-cell> <md-table-cell><md-button v-on:<md-icon>visibility</md-icon></md-button></md-table-cell> </md-table-row> </md-table> </div> </template> <script> const baseUrl = ''; const searchData = { books: [], query: '' } export default { data: function (){ return searchData; }, methods: { search() { this.$http.get(baseUrl+'/search.json', {params: {title: this.query}}).then((response) => { this.books = response.data.docs; }) }, viewDetails(book) { this.$router.push({ path: 'details', query: { title: book.title, authors: book.author_name && book.author_name.join(', '), year: book.first_publish_year, cover_id: book.cover_edition_key }}); } } } </script> <style> .input-group { margin-top: 1rem; display: flex; justify-content: center; } .input-group-field { margin-right: 0; } .input-group .input-group-button { margin-left: 0; border: none; } .input-group .md-raised { margin-top: 0; margin-bottom: 0; border-radius: 0; } </style>
This file contains quite a lot, so let’s discuss each section one by one. The top part contains the HTML template. This consists of a search form followed by a table that will display the results of a search.
The
<script> segment of the search component contains the logic. It contains the search query and the results of the search in the
books array. The component contains two methods. The
search() method takes the search terms and performs a
GET request to the OpenLibrary API.
When the result comes back, the
books array is filled with the search results. The
viewDetails method will cause the router to navigate to the
Details component (which you will implement shortly). Each entry in the table contains a button linked to this method, allowing the user to view the book's details. Finally, the third section in
Search.vue contains some CSS styling.
The last component that needs implementing shows the book's details. Create a new file
src/views/Details.vue and fill it with the code below.
<template> <div class="details"> <h1>Book Details</h1> <div class="content"> <md-card <h3>{{book.title}}</h3> <img v-bind: <h4>Authors</h4> <p> {{book.authors}} </p> <h4>Published</h4> <p>{{book.year}}</p> </md-card> </div> </div> </template> <script> export default { data: function() { return { book: { title: this.$route.query.title, cover_id: this.$route.query.cover_id, authors: this.$route.query.authors, year: this.$route.query.year, } } }, methods: { getImageSrc() { return ""+this.book.cover_id+"-M.jpg" } } } </script> <style> .content { display: flex; justify-content: center; } .details-card { max-width: 800px; padding: 1rem 2rem; } .details-card p { padding-left: 2rem; } </style>
This component simply shows the book’s details obtained from the route’s query parameters. The only method,
getImageSrc(), returns the URL of the cover image.
When the application was generated by the
vue command line tool, it also created a
HelloWorld component at
src/components/HelloWorld.vue. This is not needed in the application, so you can delete it. If you delete this file, you’ll need to delete references to it in
src/views/Home.vue as well.
In order for a sub-component to be shown, it must be registered with the router. Open
src/router.ts and replace it with the code below.
import Vue from 'vue' import Router from 'vue-router' import Home from './views/Home.vue' import Search from './views/Search.vue' import Details from './views/Details.vue' Vue.use(Router) const router = new Router({ routes: [ { path: '/', name: 'home', component: Home }, { path: '/search', name: 'search', component: Search, }, { path: '/details', name: 'details', component: Details, } ] }) export default router;
This completes the basic application. To try it out, you can run the command:
npm run serve
Open a browser and navigate to. You can search for a book and click on the eye icon to look at the book’s details.
Add Secure Authentication to Your Vue PWA
In many situations, you will want to restrict access to parts of your application to users that are registered. You could start implementing your own user registration and sign-in mechanism. This is not only cumbersome but can leave you with security risks if the user registration is not tested properly. Fortunately, Okta provides a single sign-on service that lets you add safe user authentication with little effort. In this section, I will be showing you how to restrict access to the
/search and
/details routes to registered users.
To start, you need to create an account with Okta. Visit developer.okta.com and click the Sign Up button. On the next screen, enter your details and click on Get Started.
Once you have finished the registration process, you will be taken to the developer dashboard. Each application that you want to use with Okta authentication must be registered and will receive its own client ID. Click on Add Application and, on the next screen, select Single Page Application. When you click on Next, you will see a screen with settings. Make sure the port is set to
8080. This is the port that Vue uses to serve applications.
Once you are finished you will be given a
clientId. This is needed in your application when configuring Okta. In your application directory now run the following command.
npm i @okta/okta-vue@1.1.0 @types/okta__okta-vue@1.0.2
This will install the Okta SDK for Vue. To set up Okta with your application, open
src/router.ts. Add the following lines after the import statements.
import Auth from '@okta/okta-vue'; Vue.use(Auth, { issuer: 'https://{yourOktaDomain}/oauth2/default', client_id: '{yourClientId}', redirect_uri: window.location.origin + '/implicit/callback', });
The
Vue.use(Auth, ...) statement sets up Okta. You will need to copy the client ID from your Okta developer console as the
client_id parameter.
In the
routes array, add the following entry.
{ path: '/implicit/callback', component: Auth.handleCallback() }
This route will handle the callback from Okta after the user has logged in.
Add a
beforeEach() condition to the router at the bottom that sets up a redirect if authentication is required.
router.beforeEach(Vue.prototype.$auth.authRedirectGuard());
Finally, you have to add the authentication guards. In the router entries for the
/search and
/details, add the following property.
meta: { requiresAuth: true, },
With this, your application is protected. If you now try to navigate to the
/search route, you will be redirected to the Okta login page. In addition to protecting certain routes, the application should also let the user know if the user is logged in and provide a direct link to the Okta login page. Open
src/App.vue. In the template section add the following into the
<md-toolbar>.
<md-button Logout </md-button> <md-button v-else v-on: Login </md-button>
Replace the contents of the script section with the following.
export default { data: () => ({ title: "Vue Books", authenticated: false }), created() { this.authenticated = this.isAuthenticated(); }, watch: { $route: "isAuthenticated" }, methods: { async isAuthenticated() { this.authenticated = await this.$auth.isAuthenticated(); }, login() { this.$auth.loginRedirect("/"); }, async logout() { await this.$auth.logout(); await this.isAuthenticated(); this.$router.push({ path: "/" }); } } };
The flag
authenticated keeps track of the login status. This controls the visibility of the Login and Logout buttons. This completes the implementation of the Vue Books application.
Create Your PWA in Vue
Until now, I have guided you through creating a standard web application. The only step towards creating a PWA was the choice to support PWAs during the initial set-up of the application. It turns out that this is almost everything that needs to be done. You can check the performance of the application using Google Chrome’s Lighthouse extension.
To test your application properly, you need to serve it in production mode. First, build the application by running the command:
npm run build
This will compile the application into the
dist/ subdirectory. Next, you need to install the
http-server-spa package by running the following command.
npm install -g http-server-spa@1.3.0
Then start the server by running:
http-server-spa dist index.html 8080
Open the Chrome browser and navigate to. You can install the Lighthouse extension or use the Audits tab in Chrome Developer Tools to run Lighthouse.
If you have the extension installed, you will notice a little Lighthouse icon in the navigation bar. If you click on it a little panel will open. Select Generate Report and Lighthouse will start analyzing your application. There are a number of checks and you should get a score of 92 on the Progressive Web Application score. If you served the application using a secure server through HTTPS protocol you would likely score 100.
You could stop here and say that you have created a perfectly scoring PWA. But you can do a little better. If the application is modified to cache past search requests, a user can re-issue past searches and still get results, even if the device is offline. The
axios-extensions library includes a caching layer that can be used out of the box. Install the extensions.
npm i axios-extensions@3.0.4
Open
src/main.ts and add the following import.
import { cacheAdapterEnhancer } from 'axios-extensions';
Then replace
Vue.use(VueAxios, axios) with the following.
Vue.use(VueAxios, axios.create({ adapter: cacheAdapterEnhancer(axios.defaults.adapter as any) }));
That’s it! You have created a PWA with Vue. A service worker caches access to the server resources. Requests to the external API are cached allowing the user to use the application without a network connection. The
vue command line tool also created a manifest in
public/manifest.json and a set of icons in
public/img/icons. This allows the browser to install the application locally. For a production application, you should edit the manifest and update the icons.
Learn More about Vue and PWAs
This tutorial showed you how to create a PWA with Vue. PWAs are becoming increasingly popular in a world with more and more mobile devices with flaky internet connections. Vue is an excellent framework for developing web applications and makes it simple to add PWA features. As you have seen, adding authentication with Okta is pretty easy too.
You can find the source code for this tutorial on GitHub at oktadeveloper/okta-vue-books-pwa-example.
If you want to learn more about Vue, PWAs, or secure authentication, check out the following links:
- Build a Single-Page App with Go and Vue
- The Ultimate Guide to Progressive Web Applications
- Add Authentication to Your Angular PWA
- Build Your First Progressive Web Application with Angular and Spring Boot
To be notified when we publish future blog posts, follow @oktadev on Twitter. If you prefer videos, subscribe to our YouTube channel.
Discussion (0) | https://dev.to/oktadev/build-your-first-pwa-with-vue-and-typescript-51b0 | CC-MAIN-2022-33 | refinedweb | 3,046 | 50.73 |
![if !IE]> <![endif]>
Alternative Languages
The languages described so far in this chapter have been extensions to what might be called standard C/C++. In some ways, C and C++ are not ideal languages for parallelization. One particular issue is the extensive use of pointers, which makes it hard to prove that memory accesses do not alias.
As a consequence of this, other programming languages have been devised that either target developing parallel applications or do not suffer from some of the issues that hit C/C++. For example, Fortress, initially developed by Sun Microsystems, has a model where loops are parallel by default unless otherwise specified. The Go language from Google includes the concept of go routines that, rather like OpenMP tasks, can be exe-cuted in parallel with the main thread.
One area of interest is functional programming. With pure functional programming, the evaluation of an expression depends only on the parameters passed into that expres-sion. Hence, functions can be evaluated in parallel, or in any order, and will produce the same result. We will consider Haskell as one example of a functional language.
The code in Listing 10.16 evaluates the Nth Fibonacci number in Haskell. The lan-guage allows the return values for functions to be defined for particular input values. So, in this instance, we are setting the return values for 0 and 1 as well as the general return value for any other numbers.
Listing 10.16 Evaluating the Nth Fibonacci Number in Haskell
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
Listing 10.17 shows the result of using this function interactively. The command :load requests that the module fib.hs be loaded, and then the command fib is invoked with the parameter 10, and the runtime returns the value 55.
Listing 10.17 Asking Haskell to Provide the Tenth Fibonacci Number
GHCi, version 6.10.4: :? for help
Prelude> :load fib.hs
[1 of 1] Compiling Main ( fib.hs, interpreted )
Ok, modules loaded: Main.
*Main> fib 10
55
Listing 10.18 defines a second function, bif, a variant of the Fibonacci function. Suppose that we want to return the sum of the two functions. The code defines a serial version of this function and provides a main routine that prints the result of calling this function.
Listing 10.18 Stand-Alone Serial Program
main = print ( serial 10 10)
fib 0 = 0 fib 1 = 1
fib n = fib (n-1) + fib (n-2)
bif 0 = -1 bif 1 = 0
bif n = bif (n-1) + bif (n-2)
serial a b = fib a + bif b
Rather than interpreting this program, we can compile and run it as shown in Listing 10.19.
Listing 10.19 Compiling and Running Serial Code
C:\> ghc -O --make test.hs
[1 of 1] Compiling Main ( test.hs, test.o )
Linking test.exe ...
C:\> test
2
1
The two functions should take about the same amount of time to execute, so it would make sense to execute them in parallel. Listing 10.20 shows the code to do this.
Listing 10.20 Stand-Alone Parallel Program
import Control.Parallel
main = print ( parallel 20 20)
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
bif 0 = -1
bif 1 = 0
bif n = bif (n-1) + bif (n-2)
parallel a b
= let x = fib a
y = bif b
in x `par` (y `pseq` (x+y))
In the code, the let expressions are not assignments of values but declarations of local variables. The local variables will be evaluated only if they are needed; this is lazy evaluation. These local variables are used in the in expression, which performs the com-putation. The import statement at the start of the code imports the Control.Parallel module. This module defines the `par` and `pseq` operators. These two operators are used so that the computation of x=fib a and y=bif b is per-formed in parallel, and this ensures that the result (x+y) is computed after the calcula-tion of y has completed. Without these elaborate preparations, it is possible that both parallel threads might choose to compute the value of the function x first.
The example given here exposes parallelism using low-level primitives. The preferred way of coding parallelism in Haskell is to use strategies. This approach separates the com-putation from the parallelization.
Haskell highlights the key advantage of pure functional programming languages that is helpful for writing parallel code. This is that the result of a function call depends only on the parameters passed into it. From this point, the compiler knows that a function call can be scheduled in any arbitrary order, and the results of the function call do not depend on the time at which the call is made. The advantage that this provides is that adding the `par` operator to produce a parallel version of an application is guaranteed not to change the result of the application. Hence, parallelization is a solution for improving performance and not a source of bugs.
Related Topics
Copyright © 2018-2023 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Alternative-Languages_9538/ | CC-MAIN-2022-40 | refinedweb | 865 | 63.7 |
Return the number of rows affected by a statement
#include <qdb/qdb.h> uint64_t qdb_rowchanges( qdb_hdt_t *hdl qdb_result_t *result );
qdb
This function returns the number of rows that were affected in a statement. It first looks in result (if the QDB_OPTION_ROW_CHANGES option has been set by qdb_setoption()), returning the number of rows for the statement that produced the result. If result is NULL, or QDB_OPTION_ROW_CHANGES is off, the function queries the database handle hdl and returns the information about the last executed statement.
If this function returns 0, check errno to make sure that it is EOK, indicating that no row was affected (you should set errno to 0 before calling this function if you want to distinguish between an error and 0 rows). If errno is not EOK then there was an error with the request.
QNX Neutrino
qdb_setoption(), qdb_statement() | http://www.qnx.com/developers/docs/6.5.0_sp1/topic/com.qnx.doc.qdb_en_dev_guide/api/qdb_rowchanges.html | CC-MAIN-2022-27 | refinedweb | 141 | 59.74 |
On Dec 7, 2005, at 4:00 PM, William A. Rowe, Jr. wrote:
> Brian W. Fitzpatrick wrote:
>> On Dec 7, 2005, at 3:26 PM, William A. Rowe, Jr. wrote:
>>> fitz@apache.org wrote:
>>>
>>>> Prefix non-static symbols with 'apr__' to avoid namespace
>>>> conflicts.
>>>> * random/unix/sha2.h, random/unix/sha2_glue.c, random/unix/sha2.c:
>>>> Rename SHA256_Init, SHA256_Update, SHA256_Final,
>>>> SHA256_Transform,
>>>> SHA384_Init, SHA512_Init, SHA512_Final, SHA384_Final,
>>>> SHA512_Update,
>>>> SHA384_Update, and SHA512_Transform, , to apr__SHA256_Init,
>>>> apr__SHA256_Update, apr__SHA256_Final, apr__SHA256_Transform,
>>>> apr__SHA384_Init, apr__SHA512_Init, apr__SHA512_Final,
>>>> apr__SHA384_Final, apr__SHA512_Update, apr__SHA384_Update, and
>>>> apr__SHA512_Transform.
>>>
>>> Are these in fact 'not for external use'?
>> That is correct. You may notice that there are no symbols for
>> these functions in APR's public headers.
>
> Correct, but why? ...
*sigh* They're merely meant to be consumed by the yet unfinished
PRNG. Ask Ben Laurie for details.
>>> If they are for export, why the choice of the extra underbar?
>>> Given that we
>>> do export MD5 for everyone's use, and the universal contention
>>> is that MD5 is,
>>> if not today, then, dead by tomorrow for most security purposes.
>
> I didn't see your comment to this, it seems these -should- be
> exported to me.
And I'm telling you that they should NOT be exported.
>>> It seems these should be public, and the '__'s will inevitably
>>> confuse some
>>> devs, as well as not following our conventions.
>> Then we should document it as our convention. :-)
>
> Or something +1... _apr_foo or apr__foo is fine here.
>
> We should also spell out that any such __ entry points are NOT
> subject to our
> revisioning policy, shoot yourself in the foot at your own risk.
If they're not in the public headers, I don't think it's a problem.
This is not a big deal--I am correcting an oversight.
-Fitz | http://mail-archives.apache.org/mod_mbox/apr-dev/200512.mbox/%3C4755D6FD-F684-446A-BA44-8940D93DB9D1@red-bean.com%3E | CC-MAIN-2014-23 | refinedweb | 292 | 76.01 |
Transcript
Brikman: This talk is called "How to Test Infrastructure Code." I will go through some automated testing practices that we found for tools like Terraform, Docker, Packer, Kubernetes. There will be a lot of code, so get ready to read code, this is hands-on. I'll try to run some of the code, we'll see how that goes.
First, I'm going to start with a bit of an observation, something I've noticed about the DevOps industry, Ops, sysadmins, whatever you want to call them, and that is that we're all living in a bit of a world of fear. This is the predominant emotion that I'm seeing from most of the people that I work with. They're just living in fear - fear of things like outages, fear of security breaches and data loss, and just generally fear of change. People are constantly afraid to change things because they don't know how late they're going to be up, how bad it's going to be, or are just terrified.
We know what fear leads to. Fear leads to anger, anger leads to hate, hate leads to suffering - the great Scrum Master Yoda taught us these lessons. We all know what suffering leads to. Most teams seem to deal with this in two ways. One, a lot of drinking and smoking. Number two, deploying less and less frequently. It's scary, it's terrifying, so you just avoid it and you do it less and less often. Unfortunately, both of these solutions just make the problem much worse. Your releases get bigger, there's more risk. This actually makes the whole problem a lot worse, and then, you end up in this sort of rule: sixty percent of the time it works every time.
Automated Tests
I don't want to live in that kind of world. I think there's a better way to deal with this constant state of fear and that is automated testing. I don't want to make the claim that this is going to solve all the problems in the world, it's going to make all your fears go away, but automated tests do have one very interesting impact. When you see teams that do a good job with it, this is exactly what you see, which is, instead of fear, you start to see confidence. That's what tests are about. Tests are not about proving that your code works, they're not some perfect thing that says, "Yes, everything's great," they are about confidence. It's about emotions, it's about how you feel about making those changes. That's really important because you can fight fear with confidence. That's really the key.
We do mostly know how to write automated tests for application code. If you have an app built in Ruby, or Go, or Python, or any of these general-purpose languages, we more or less know how to test these things. But how do you test infrastructure code? If you have a whole pile of Terraform code, how do you know that the infrastructure it deploys works the way you expect it to? Or, if you have a pile of Kubernetes code, how do you know that the way it deploys your services is the way you actually need it to work? How do you test these things?
That's the goal of the talk. I'll share with you some ideas, some insights on how to test with some of these tools and we will look at a whole bunch of code. Hopefully by the end of it, you'll at least have some ideas of how to sleep better at night, how to be a little less afraid.
I am Yevgeniy Brikman, also go by the nickname Jim, which most people find a little easier to pronounce. I'm the co-founder of a company called Gruntwork, and this is where a lot of this automated-testing experience comes from. At Gruntwork, we've built a library of hundreds of thousands of lines of reusable code for Terraform, and Kubernetes, and Docker, etc., and it's used in production by hundreds of companies. The way our tiny company is able to maintain all of that code and keep it working, as the whole world around us is changing, is through a lot of automated testing. We spend a lot of time thinking about this.
I'm also the author of a couple books. There's "Terraform, Up & Running," that's actually the old cover, I need to update this slide as well. Second edition is out, go get it. "Hello, Startup," also talks a lot about the software delivery process.
Here is what we're going to talk about today. We're going to look at the various testing techniques that are out there for infrastructure code, look at static analysis, unit testing, integration testing, end-to-end testing. These are loose categorizations. Some people become very religious about what each of these terms means. These are more of a helpful mental model to navigate the space.
Static Analysis
We've got a lot to cover. We'll get started with static analysis. The idea here is, you want to be able to test your code without actually running the code or, in the case of infrastructure code, without actually deploying anything for real. That's the goal of static analysis, "Look at my code, don't run it, tell me if there's a bug or if there's some sort of issue."
There are a few categories in here. Again, these are not perfect groupings, there's some overlap between them, just a useful mental model for navigating here. The first one are the compilers, the parsers, the interpreters for whatever language you're using. The idea is these things are checking your syntax, the structure of your code. The very basic thing "Does it compile? Is this valid?" YAML, HCL, Go, whatever language you're using.
There's a variety of these tools. For example, for Terraform you have the terraform validate command. I'll show you a really quick example of that. I have a little bit of Terraform code. We'll deal with what the code is in a minute. It looks like this, nothing fancy, using a very simple module. In here, I can run the validate command. It tells me, "Everything looks good." Then, if I mess up the code, like I make some silly typo in here and I run terraform valid again, it will give me an error. That is a very basic level of testing that you can do for your code. "Scan it, tell me if the variables that I'm referencing are actually defined. Tell me if the syntax is valid, that I missed a curly brace." There are similar commands for Packer, and in the Kubernetes world, kubectl has dry-run and a validate flag that you can add that'll do something pretty similar.
Moving one level up from that, you want to catch not just syntactic issues but also common mistakes. There's a whole series of these tools. By the way, these slides will be available after the talk so don't worry about all these links, it should be easy for you to grab. There's a whole series of these tools. For Terraform, there's conftest, which actually works at more than just Terraform, terraform validate, tflint, etc. A whole bunch of these tools will read your code, statically analyze it, and try to catch common errors. One of the kind of idiomatic examples these tools give you is you have a security group that allows all inbound traffic. In other words, a firewall that's way too open. Something like that can be caught using tools like this, in a lot of cases. These are good to plug into your CI/CD pipeline, they run in seconds, they're going to catch a bunch of common mistakes, which again is better than having no testing at all.
Third group, which I don't have a good name for, I'll just call it dry run. Here we actually are going to execute the code but we're not going to deploy anything, it's not going to have any effect on the real world. We are running the code a little bit here, so it's a kind of an in-between between static analysis and unit testing. We're going to give some sort of a plan output and be able to analyze that. In the Terraform world, there's some nice equivalents to this, so there's actually terraform plan command that I can run here. On this module, I can run my plan command. By the way, this little thing at the front, this is just how I authenticate to AWS, ignore that. This is the actual command, terraform plan. If I run that, it'll make some API calls and it'll tell me what the code is going to do without actually changing anything in the world. Here's my plan output, it shows me that it's going to deploy some lambda functions, some API gateway stuff, etc. You can analyze this plan as a form of testing.
There are some tools that help you with that. For example, in the Terraform world, there's HashiCorp Sentinel and terraform-compliance, both of them can run terraform plan and statically analyze that thing and catch a bunch of common errors in a static way.
In Kubernetes world, there's a server-dry-run. I think this is an alpha feature, actually it's pretty new, which will actually take your YAML, your configuration, and send it to the API server. That server will process it, it just won't save the results, and so, it's not going to affect the world. Again, this is a good way to check, "Does my code more or less function to any extent?"
Those are quick little overview of the static analysis tools. What's nice about them, they run fast, easy to use, you don't have to learn a whole bunch of stuff. The downside is they're very limited in the kinds of errors they can catch. If you're not doing any infrastructure testing at all, at least add static analysis. It really just takes a few minutes of your time and it'll catch a bunch of these common mistakes. If you can do a little more, let's do a little more.
Unit Tests
That's where unit testing comes in. Now we're going to get a little more advanced. The idea with unit testing is you want to be able to test, as the name implies, a single unit of your code in isolation. In this section, we're going to go through a few things. We'll introduce the basics of unit testing. I'll then show a couple examples for two different types of infrastructure code, so we'll look at Terraform, and Docker, and Kubernetes, and then, we'll talk about cleanup.
The basics - first thing to understand about unit testing is what's a unit. I've had a lot of people come up to me and say, "I have 50,000 lines of code, deploys this enormous infrastructure. How do i unit test it?" Well, you don't, that's not a unit. Unit testing with general-purpose languages is on a single method or a single class. The equivalent with infrastructure code is going to be a single module, whatever "Module" means in the language and tools you're using. Your infrastructure should be broken up into a bunch of small pieces. If it's not, that's actually step one to being able to unit test it. If you right now have a Terraform file, or CloudFormation, or any other language with 50,000 lines of code, that's an anti-pattern, break it up into a bunch of small standalone pieces. One of the many advantages you'll get is you can unit test those pieces.
Next thing is, with app code, when you're testing those units, when you're testing a single method or class, you can typically isolate away the rest of the outside world - all of your databases, filesystem, web services. You isolate them and you can test just the unit by itself, which is good, because then you can test very quickly, and the tests are going to be nice and stable.
If you actually go look at most infrastructure code - so here's some Terraform code - what's this code doing? All it's doing is talking to the outside world. That's 99% of what your code is doing, whether it's Kubernetes, CloudFormation, AWS. All it really does is talk to the outside world. If you try to isolate the outside world, there's really nothing left to test.
The only real way to test infrastructure code beyond static analysis is by deploying it to a real environment, whatever environment you happen to be using. That might be AWS, or Google Cloud, that might be your Kubernetes cluster you actually have to deploy because that's what the code does. If you're executing it, a deployment is the result.
Key takeaway: there is no pure unit testing for infrastructure code in the way that you might think of it for application code. This means your test strategy looks a little more like this. You're going to deploy the infrastructure to a real environment, you're going to validate that the infrastructure works, and I'll show you a few examples how to do that. Then, at the end of the test, you undeploy the infrastructure again. This is where the terminology gets kind of messy, this is more of an integration test, but we're testing one unit, one module, so I prefer to just stick with the word unit test and just think of it that way.
There's a bunch of tools that can help you implement this strategy, not a comprehensive list, this is just some of the more popular ones. Some of them will do the deploy and undeploy steps for you, some of them expect you to do the deploy and undeploy outside of the tool yourself. Terratest, for example, can do deploy and undeploy, can do validation, and it integrates with a whole bunch of tools, including Terraform, and Kubernetes, and Docker. There's a bunch of other tools, some that are specific to Terraform, some that are specific to checking servers. Definitely check these out, all the links are in the slide deck and you'll have access to that soon. In this talk, we're mostly going to use Terratest, but just bear in mind that the same technique will work with pretty much any tool.
Let's try to write a unit test here. This talk has a bunch of sample code. There's some Terraform code, some Kubernetes, and the automated tests for it. I don't know that that's the best link, I should've gone with a slightly shorter link. It's in the gruntwork-io/org, it's called "Infrastructure as Code Testing Talk," I'll tweet this one out, it'll be in the slide deck. All the code I'm showing you here, you can check it out after the talk. One of the things you'll find in that sample code is a simple little hello-world application that we can test. Let me actually deploy that little application. I'm just going to deploy this thing in the background, and then, I'll walk through the code and show you what this thing is actually doing.
Here's the hello-world app. It's Terraform code looks a little bit like this, very simple code, that's really all there is to it. It's using a module to deploy a Serverless application. For the purposes of an example, I'm using AWS Lambda and API Gateway here just because they deploy quickly so the talk goes faster if I do this. This module lives in the same repo, here it is. If you're interested in the code, it does more or less what you expect, to play a lambda function, create an [inaudible 00:15:27] rule for it, deploy API gateway, etc. This code also outputs the URL of this little endpoint, at the end, and what we're actually running in AWS Lambda is some JavaScript code. This is basically the hello-world example, so it just says "Hello world," and returns 200 OK.
It's a really simple piece of code, it's deployed in the background. I can now copy and paste this URL, run curl on it, hit Enter, and there we go, we've got our nice hello world. This is a nice thing for us to test and play around with here during the talk. Let me actually undeploy it now just so I don't forget about that. What you notice is, what I'm doing right now is I'm manually testing this thing. What did I do? Deploy, validate, and now here, I'm doing the undeploy. We're going to actually write a unit test that does exactly these steps but automatically, in code. I'll walk through what the code does in the slide deck, and then, I'll show you the actual code snippet in a second, we'll run it and see if it works.
Since we're using Terratest and Terratest is a Go library, we're going to write the test in Go. If you don't know Go, don't panic, it's not a hard language and not critical to understand everything about the talk. It's more of the concept just to get the mindset right. We create a hello_world_app_test.go. This is the basic structure of the test, and I'll walk through this line by line. This is actually almost the entire unit test. The first thing we do is we say, "Ok, here are my options for running Terraform. My code lives in this examples/hello-world-app folder." I then use a Terratest function, this terraformInitAndApply to run terraform init and terraform apply. This will actually deploy into my AWS account. I'm then going to validate that the code is working, and I'll show you the contents of that in just a second. Then, at the end of the test, we're going to run terraform.Destroy.
This is defer. If you're not familiar with Go, defer basically says, "Run this before the function exits, no matter how it exits." Even if the test fails, it'll always run defer, similar to a try finally or ensure in other languages. That's a test; apply, validate, destroy, that's really what we're doing. The validate isn't particularly complicated. We're using a Terratest helper to read that URL output. Then, we're using another helper to make HTTP requests to that output. We're looking for a 200 OK that says, "Hello world." We're going to retry it a few times because the deployment is asynchronous, so it's not guaranteed to be up and running the second apply finishes.
That's the whole test. Let me run it really quickly, it'll take about 30 seconds to run. I'll jump into the test folder, I'll run go test. This is our hello-world unit test here. I'll let that thing run in the background for about 30 seconds. Let's look a little more at the code. What I'm actually running here is here's my Test folder, here's hello_world_app_unit_test, here's the Go code. It's pretty much identical to what I showed you in the slide deck, there's one little piece that I'll explain in a few minutes. The rest is exactly as I said, terraformInitAndApply, validate, destroy. The validate basically reads the output and does a bunch of HTTP requests in a retry loop.
Speaking of HTTP requests, the reason we're using HTTP as the infrastructure I'm deploying here is a web service. It makes sense to validate it by making HTTP requests. Of course, you might be deploying other types of infrastructure and there's different ways to validate those. For example, if you're running a server that's not listening on any port, then you might want to validate it by SSHing to that server and checking a whole bunch of properties. Terratest's ways to do that, InSpec, all those other tools, they're really good at that. If you're running a cloud service, you might want to use the cloud API's to verify that it works. If you're deploying a database, you might want to run SQL queries, etc. Just bear in mind that validation is very use-case-specific, but for the purposes of this talk, it'll just always be HTTP requests.
Running tests, you authenticate to whatever environment you're deploying to; in this case, I'm with indicating to AWS. Then, you run the go test command to actually kick off the test suite. If I jump back to the terminal, it should be done running the tests. That's always good to see, the word PASS. It took about 35 seconds. The log output, unfortunately, is hard to read because the font size is kind of wrapping around. If you dig through here, you'll see that the test ran terraformInit. Then, it ran terraform apply. Here's the terraform-apply log output. It deployed the Serverless app, ran terraform output to fetch the URL, it then started making HTTP requests, got the response it expected, ran terraform destroy. In 30 seconds, I can now check that this module is working the way I expect to. I can run this after every single commit. That's huge because I just went from a pile of code that, "I don't know, maybe works, maybe doesn't. Who knows? I guess our users will find out," to, "I can test this after every single commit to this code."
That is the unit testing example for Terraform. Just to make the point that this is not something specific to Terraform, let's do a unit test for something a little different. We're going to look at some Docker and Kubernetes code here as well. Let me jump back into my IDE, the sample code is in that same repo. Up here, we have our docker-kubernetes example and there's really just two files. One is a Docker file, and this defines a Docker image for a really simple hello-world server. In the real world, this would be your Ruby app, your Java application, whatever it is that you're building, but for this talk, it's just a really simple hello-world server. The other thing in here is this blob of YAML, this is used with Kubernetes, it defines a deployment. If you don't use Kubernetes, this is basically a way to say, "I have this Docker container over here. I want to deploy one copy of it, and I want to stick a LoadBalancer in front of it that will listen on port 8080. Deploy the thing, put a LoadBalancer so I can access the thing."
I can run this thing as well. I'll show you how I test this thing manually first, and then, we'll write the automated test for it. I'll jump into the /examples/ folder. First thing to do is build my Docker image so you can do that with the docker build command. That will run pretty quick because it's all coming from cache, I've run this before. If you're running it from scratch, it takes 30 seconds to a minute.
That created this Docker image that I can now deploy to a Kubernetes cluster. I can deploy any Kubernetes cluster I want to one running in AWS or in GCP. If you have the latest Docker for desktop app, Kubernetes is actually built-in. You have one running on your own computer or you can push a button to turn it on, which is pretty neat because I can also now test with Kubernetes completely locally. What I can do is I can run kubectl apply on that deployment.yml file. I hit Enter and that thing will deploy my service. We can see if that worked. We can go fetch the pods, so there's my container, it's now in running status. Then I can do get services. There's the service in front of it, that's that little LoadBalancer, and you can see it's external IP as localhost and it's listening on port 8080. Which means I can now curl 8080 and get a nice little hello world. Ok, we got a little Docker example, it's running Kubernetes. Then, of course, at the end, we can also delete it by running the kubectl delete command.
That's how I test manually. How would I test the exact same thing with a unit test, an automated test? As you can probably guess, the structure is going to look very very similar to what we just did for the Terraform unit testing. I'll walk through it again in the slide deck. We created docker_kubernetes_test.go and that's the basic structure of the test. I'll go through it. The first thing we do is build the Docker image, and I'll show you the contents of that method in just a moment. Then we say, "Ok, the Kubernetes deployment is defined in this file. I want to authenticate to my Kubernetes cluster." I'm just using all the defaults, which means it'll just use whatever my computer is logged into, which is the Kubernetes running locally. We run kubectl apply using a Terratest helper, we validate, I'll show you the contents of that in a sec. Then, at the end of the test, using that defer keyword, we run kubectl delete. There's no magic. All I'm doing is taking the exact same steps I was doing manually and we're just writing them down in code. The value terratest brings is just to give you a bunch of nice helper methods for doing this, but you can find similar helpful methods or write them by yourself.
Let's look at the two functions I mentioned. This is the buildDockerImage function, it's using another Terratest helper, this docker.Build, and it's basically just telling it where the Docker file is located and what to tag it with. Not particularly complicated. Then, the validate function looks very similar. We wait until the service is available, basically Kubernetes is completely asynchronous, so it can take a few seconds to actually deploy, depending on the cluster you're using. Then, we start making HTTP requests to this thing, just like we did with the hello-world app. The way we get the URL for a Kubernetes service is to basically automate those steps I showed you with kubectl get pods, get services. I just put that into this method, so there's a GetService and a GetServiceEndpoint method.
To run this test, you will authenticate to some Kubernetes cluster. As I said, I'm already authenticated to the one running locally. At this point, I can just run that test. Let's do that. Just go test, and there it is, Kubernetes. Hit Enter. This test should run very quickly because it's all running locally. There we go, that took a grand total of 4.69 seconds. What did the test do? The test built my Docker image, so you can see the output there. It's all running from cache so that runs especially fast. Then, it configures kubectl, it ran kubectl apply. You can see it started making HTTP requests, and actually the first one failed because Kubernetes is asynchronous, that's why we do it in a retry loop. After another try or two, it succeeded, and then, it cleaned everything up again at the end of the test. In 5 seconds, you can now add this even as a pre-commit hook if you really wanted to. Or, after every commit, you can check if these Kubernetes configurations you're writing, not just that they're syntactically-valid, which is good to do with static analysis, but that they actually deploy a working service the way you expect to.
I showed you the code in the slide deck but the actual code for that test is very similar, buildDockerImage. Here's our space. I skip the name spacing thing, I'll come back to that in a little bit, and then, basically here it is, KubectlApply, delete, validate. That is unit testing. A lot of people see this and they're , "Is that it? There's no magic? There's no magical thing that does this for me?" No, that's it. You're just automating the things you would've done manually. That's the basis of unit-testing infrastructure code, you deploy it for real. For me, this is well worth it, because right now, with these unit tests, I have a lot of confidence in this code. I know that if somebody changes the code and does something silly, these tests will almost certainly fail and will catch it before it makes it to production. That's worth a little bit of work.
I'll mention one more thing about unit testing, which is cleaning up after those tests. Especially tests for Terraform, CloudFormation, things like that, they're spinning up and tearing down all sorts of resources in your Google Cloud, AWS, Azure accounts. For example, we have one repo that deploys the Elasticsearch stack, an Elk cluster. After every commit, that spends up something like 15 Elk clusters and various configurations, pokes at them for a while, and then, tears them all down. That's a lot of infrastructure after every single commit.
You definitely want to have a completely separate "sandbox" account for automated testing. Don't use production, I hope that's self-evident. You might not even want to use some of your existing staging or dev accounts where human beings are using it just because of the volume of infrastructure that's going to be coming up and down will be pretty annoying. We usually have a completely isolated account used solely for automated testing.
There's one other reason to do that, which has to do with cleanup. The tests that I showed you, they all run terraform destroy or kubectl delete, they all do clean up after themselves, but occasionally that fails. You might have a bug in your test, somebody might hit ctrl + C, something might crash. You don't want a whole bunch of stuff left over in your testing account. There are some tools out there that can clean everything up, and the tool, for example, is called cloud-nuke. Don't run it in production but, if you have a dedicated testing account, that's a good place to run something like that. You can run these as a cron job and just clean up stuff every day.
Integration Tests
That's unit testing. Let's move along to integration testing. The idea with integration testing is, just because your individual units seem to be working doesn't mean that they're going to work when you put them together. That's what you want to find out with integration testing. I'll show you just one example of integration testing, and once you see it, you'll see the structures more or less identical to what we've already talked about. There's not a whole lot new to learn. The basic approach we used was more or less identical. Then, we'll talk about a few other things with parallelism, and test stages, and retries.
Here's an example from that same repo where we have two modules that we want to test and see if they work together correctly. We have one called proxy-app and we have one called web-service. I'll show you the code for those. These are using basically the exact same module, so there's nothing really new here, they're using that same Serverless app module. The only difference is, web service, instead of a plain hello world, it tries to pretend that it's some kind of a back-end web service that your company relies on and it returns a little blob of JSON instead. Then, proxy-app, very similar thing. Again, another little service application. The code that it's running will proxy a URL. You pass in the URL you wanted to proxy as an environment variable, it'll make an HTTP request to it, and then, forward along the results. You can sort of think of this as one of these is a front-end application, one of these is back-end, and you want to make sure they work together correctly.
How are we going to test these things? The first thing to note is the proxy application has an input variable which is how you tell it what URL you want it to proxy. Our web service has an output variable which is its URL. We want to proxy that URL, that's our goal. We're going to write a thing called proxy_app_test, another Go file. Here's the structure, so, hopefully, you're starting to get used to this approach. Going through it line by line, you'll see there's really nothing new here. We're going to configure our web service, and I'll show you what this is doing, but it's that same terraform.Options thing from before. We're going to run terraformInitAndApply to deploy the web service. Then, we're going to configure the proxy application passing it information from the web service, so this is really the only new thing here - we're passing information from one to the other. I'll show you these methods in just a sec. Then, we're going to run terraform apply to deploy the proxy application, we're going to validate it works. Then, at the end of the test, in defer, we're going to run terraform destroy on each of those modules. Exact same structure - apply, validate, destroy.
Looking at those methods, here's configWebService. It's just returning one of those terraform.Options structs, it says, "That's where my code lives." Here's the slightly new thing, which is configProxyApp. This thing is also returning a terraform.Options with one new thing, it's going to read in the URL output from the web service and it's going to pass it as an input variable to the proxy application. Here we're chaining one module's outputs into the inputs of another module, just by passing them along using whatever variables those modules support. The validate method is completely identical to the hello-world one, it's just doing a bunch of HTTP requests. The only difference is a looking for a blob of JSON in the response, instead of plain text.
We can run the integration test. The code for it, by the way, is right here. It's exactly as I said, config the web service, run apply, config the proxy-app, run apply, validate, and then, at the end of the test, run destroy a couple times.
I will let that test run in the background. This will take a little bit longer, and that's actually an important point. That's running in the background and it'll take a few minutes to run, all told. That's important. Integration tests in infrastructure code, as you might expect, take longer than unit tests, just like everywhere else. They can actually take a lot longer, so I'm testing these really simple hello-world lambda functions that deploy quickly. If you're deploying a database, that could take 20 minutes just by itself. These tests can take longer.
What do you do about that? There's a couple things you can do to speed things up. One is, run your test in parallel. This of course doesn't make any individual test faster but at least your whole test suite is only as slow as the slowest test, rather than everything running sequentially. That's useful because these tests can take a while. Telling tests to run in parallel, in Go, is really easy, you just add t.Parallel to the top of any test function. Then, when you run go test, all of those tests that have that will run in parallel. If you go back and look at the actual test code, in this example repo, you'll see that every test has t.Parallel as the very first line of code in the test.
There is one gotcha though, which is you could run into resource conflicts if you're not thoughtful about this. Here's what I mean by that. Your modules, whatever it is that you're testing, your infrastructure code, is creating resources. For example, here we're creating an IAM role and a security group in AWS, and those resources might have names. In this case, AWS actually requires that IAM roles and security groups, the name has to be unique. If you hard code the name into your code and you run two tests in parallel and they both try to use the same name, you're going to get a conflict and the tests will fail.
What you need to do is, you need to namespace all of your resources, in other words, provide a way to override the default name so that you can set it to something unique at test time. I'll just show you a couple real-world examples of that. If we go look at our Serverless app, that module I've been using, you can see it creates a lambda function, and the name it sets to this input variable. It does the same thing with the IAM role and basically all the other named resources, the name is configurable. Then, when we're using that code, so if we go look at our hello-world app, we set the name to var.name, which has a default, but at test time, we're going to override that default. This is the one piece that I hadn't shown you before. If you look, we pass in a name variable in our test, which we set to include this unique identifier. There's a little function in Terratest that basically generates a 6-character string and has something like 56 billion possible combinations, it's a randomized value. This gives you a pretty good chance that two names are not going to conflict. If you override all of the names and all of your test code with something that's pseudo-random in here, then you're going to avoid these resource conflicts.
What's interesting is, this isn't just useful for testing. You should actually get into the habit of namespacing resources anyway because you might want to deploy two copies of the Serverless app in a single environment or across multiple environments. Being able to namespace things is useful for production code anyway.
We do something similar for Kubernetes as well, which is Kubernetes actually has a first-class concept of namespaces. At test time, we generate a randomly named namespace and we deploy all of our code into that namespace to ensure that this does not conflict with anything else that happens to be in the same Kubernetes cluster. Namespacing is very important in general but especially for automated tests that run in parallel.
One more concept that's pretty useful to know about are test stages. If we take a look at this proxy-app integration test, there are five stages in that test. We deploy a web service, and the proxy-app, and we validate the proxy-app, and then, we undeploy it, and then, we undeploy the other thing. In the CI environment, you need to run all of these steps, that make sense, but when you're coding locally, especially when you're first writing this test, you might want to be able to iterate on just some inner portion of this thing. Maybe you're working out how to validate the app correctly and you just want to be able to rerun the validate step over and over again and you don't want to run the rest of the stuff. As the code is written initially, you don't really have a choice, and that's a problem because all those other steps have a lot of overhead. You might want to run the validate step that takes seconds, but the test will force you to pay 5 to 10 minutes of overhead for every single test run. That gets very annoying. You can work around that. Whatever test tool you're using ideally supports this idea of test stages.
Here's what it looks like. I'm not going to run this one, I'll just walk through the code really quickly in the interest of time. This was our original test structure. We're deploying the web service, the proxy-app, and validating. What we're going to do is we're going to basically wrap those in functions. It's the same thing, there's a deploy_web_service, deploy_proxy_app, and validate, but you'll see there's this new thing called stage, using that just as an alias so the code actually fits on the slide. I basically wrap all the code with this little function. All the actual deployment code moves into these named functions, and each stage has a name. You can name it whatever you want, as long as it's unique. The point of doing this is, now, if I have a stage called Foo, I can tell Terratest to skip that stage just by setting an environment variable, that's a SKIP_Foo equals whatever, you can set it to any value.
Here's how you might use this. You might run that integration test, and the very first time you run it, you tell it to skip the clean up steps. When you run the test, it's going to run deploy_web_service, deploy the proxy-app, it's going to run validate, but it's not going to clean anything up. Those services will keep running in the background.
Now you can rerun the test, you can skip the deployment steps as well. The next time you run the test, it's just going to run the validate step over and over again. That takes seconds rather than minutes. This allows you to iterate locally much faster. You can also make changes manually, you can inspect things, you can debug things, it's basically as if you're pausing the test in the middle. That's really what we're doing with just some environment variables. Then, when you're done, you can basically tell it to clean everything up again and you're done.
Test stages are very useful. The one thing you have to do to make them work, besides wrapping your coded functions, is, since we're running these tests in separate processes - we're running go test over and over again, those are separate processes - if two stages need to share data, they can't just pass it in memory, like they were doing before, because of the separate processes. Whatever data you need to pass, which is usually just like these terraform.Options things, you just need to write it to desk and read it from desk. For example, the deployWebService code will store the terraform.Options into the temp folder, and there's a helper to do that, so it's one liner. Then, the cleanupWebService code needs those terraform.Options to know what to clean up, it's going to read it from disk. That allows you to have these completely independent test stages. If you want to see the real version of that, grab that repo, and in here, there's the integration tests with stages. Here it is, here's my deploy step, another deploy step, validate. You can see, each of these is wrapped in this TestStage thing and they're all loading and saving various things to disk.
I will personally tell you this simple hack has helped me keep my sanity. Some of these tests take a really long time, and the ability to rerun pieces in seconds, rather than waiting 20 minutes, is huge. It's incredibly valuable.
One other pro tip has to do with retries. Other thing we learned from long experience is infrastructure in the real world can fail for a whole bunch of reasons - intermittent reasons. I don't mean bugs in your code, but just things like EC2 give you a bad instance or there was a brief outage somewhere or some intermittent issue of that sort. If you don't do anything about it, then your tests can become very flaky, they will basically fail for reasons that have nothing to do with actual bugs in your code.
The easiest solution for this is to add retries. You already saw that the HTTP requests in Terratest, we were doing those in a retry loop, but you can actually do retry loops all over your code and some of them are natively supported by Terratest. In that terraform.Options thing, in addition to saying where your code lives, in addition to passing variables, you can also say, "If you see an error that looks like this," this is actually a very common error you hit with Terraform, these TLS handshake timeouts are very frustrating, you can basically say, "retry up to 3 times with 3 seconds per retry." This will make your tests much more stable.>/p>
End-to-End Test
There's one more category of tests to talk about, which is end-to-end testing. The idea here, as the name implies, is to test everything together. How do you actually do that? If you have a big complicated infrastructure, how do you actually test that end-to-end? You could try to use the exact same strategy I've been showing you this whole talk, deploy everything from scratch, validate, undeploy, but that's not a very common way to do end-to-end testing. The reason for that has to do with this little test pyramid. We have static analysis, unit tests at the bottom, integration tests, end-to-end tests.
The thing about this pyramid is, as you go up the pyramid, the cost to write the test, how brittle the test is, and how long it takes to run goes up very quickly. These are some really rough numbers. Obviously, it depends on your particular use cases, but typically, static analysis runs in seconds, unit tests in a low number of minutes, integration tests take more minutes, end-to-end tests from scratch take hours. Most architectures, even if completely automated, to deploy them completely from scratch can take hours, and to test them, and then, undeploy them at the end. That's, unfortunately, too slow.
The other issue is brittleness. You can actually see this by doing a little bit of math. Assume that some resource you're deploying, EC2 instance, database, whatever it is, has a 1 in 1,000 chance of some random intermittent flaky error. I don't know if this is an exactly accurate stat but it's probably somewhere in the ballpark. You can do the math, do a little probability calculation and see what are the odds of a test failing for flaky reasons based on how much stuff you're deploying in that test. If you have a unit test and it's deploying just a handful of resources, about 10, and each one of those has a 1 in 1,000 chance of failing, then, when you have 10 of them, your chances of failure go up to 1%. If you're deploying 50 resources in an integration test, the chance that you get some kind of a flaky or intermittent error is around 5%. If you try to deploy your entire architecture which has hundreds of resources, I mean we're talking a 40%, 50% chance of just some things somewhere hitting that 1 in 1,000 chance.
You can work around 1% and 5% with just retries, that's what the retries help you overcome, but there's nothing you can do. If 40% of the time your tests are failing for flaky reasons, that's going to be very painful. Unfortunately, doing end-to-end testing from scratch tends to be just too slow and brittle, in the current world, to be useful.
The real way to do end-to-end testing is incrementally. What I mean by that is, you set up a persistent test environment. You deploy everything from scratch, which will take hours and become annoying, but you do that once and you leave it running. Then, whenever you go and update one of your modules, you roll out the changes to just that module. This is what your commit hooks are doing. They're not deploying everything from scratch, they're actually just updating an existent architecture with each change, and then, validating. Then you can run InSpec or whatever you want to validate that things are still working as expected. This will be approximately the same as unit testing or integration testing. It's not going to take that long, it'll be reasonably stable, and I'll actually give you a lot of value in seeing that your entire stack is actually working end-to-end.
As a bonus, you can test not only that the thing works after the deployment, but you can actually write a test that tests the deployment itself. For example, one very important thing is, "Is my deployment zero downtime?" Or, "Every time I roll out a Kubernetes service, do my users get 500 errors for 5 minutes?" You can actually test that, and we have a whole bunch of automated tests around exactly that. This is a really nice way to do end-to-end testing.
Conclusion
Wrapping things up, here's a overview of all the testing techniques I talked about, I'll go over them and summarize really quickly. Basically, static analysis, it's fast, it's easy-to-learn, really don't need to deploy any real resources. You should use it. The only downside is it's very limited in the kind of errors it catches, and just because my static analysis pass doesn't give me that much confidence that my code works. If you're doing nothing, at least do static analysis but don't stop there. Unit tests tend to run fast enough, they take a low number of minutes, mostly stable if you do retries, and they give you a lot of confidence that the individual building blocks that you're using work as expected. The downside is you do have to deploy real resources and you do have to write some real code. Integration tests, pretty similar. The only real difference is that they are even slower, which is a bummer, so you're going to have fewer of those. Then, end-to-end tests, similar thing, but if you do them from scratch, they're way too slow and brittle. Do them incrementally, and then, they'll have similar trade-offs to unit tests and integration tests.
Which ones should you use? Correct answer's of course, "All of them." They all catch different types of bugs and you're going to use them roughly in this proportion. That's actually why it's a pyramid. You want to have a whole bunch of unit tests and static analysis catch as many bugs as you can at that layer, then a smaller number of integration tests, and a very small number of high-value end-to-end tests.
Infrastructure code is scary when it doesn't have tests. In fact, I've heard that actually the definition of legacy code is, "Any code that doesn't have automated tests." You can fight that fear, you can build some confidence in your life by writing some automated tests.
See more presentations with transcripts
Community comments | https://www.infoq.com/presentations/automated-testing-terraform-docker-packer/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global | CC-MAIN-2022-05 | refinedweb | 8,895 | 72.05 |
"it must contain an “@” symbol
which is NEITHER the first character nor the last character in the string."
So I have this assignment in my class and I can't for the life of me figure out how to make the boolean false for having it at the front or end of the email. This is what I have so far.
def validEmail1(myString):
for i in myString:
if i == "@":
return True
else:
return False
With
s as your string, you can do:
not (s.endswith('@') or s.startswith('@')) and s.count('@')==1
Test:
def valid(s): return not (s.endswith('@') or s.startswith('@')) and s.count('@')==1 cases=('@abc','abc@','abc@def', 'abc@@def') for case in cases: print(case, valid(case))
Prints:
@abc False abc@ False abc@def True abc@@def False | https://codedump.io/share/uCOPZ5axgsDl/1/validating-where-the-quotquot-symbol-is-in-an-email-not-validation-of-the-email-itself | CC-MAIN-2017-26 | refinedweb | 134 | 93.74 |
Learn Python (Part 2)
Learn Python (Part 2)
Python Basics
Read part 1 here :- Learn Python (Part 1)
- What Is Python?
- Where Is Python Useful?
- Python Basics
- File Manipulation
- Network Communications
PYTHON BASICS
In Article 1 (Learn Python Part 1) , we looked at many of the basics of scripting. We covered loops, conditionals, functions, and more. Many of the languages we will use have similar capabilities, but syntax and execution will differ from one language to the next. In this section, we will investigate the syntactical and conceptual differences in the concepts that have already been presented, and how they apply to the Python language.
Getting started
We want to create Python files in a text editor. Text editors are a matter of personal preference. As long as the indentation is consistent, Python won’t mind. For those who do not already have an editor of choice, the Kate editor that was demonstrated in Article 1 (Learn Python Part 1) has a graphical user interface (GUI) and is simple to use. In addition to having syntax highlighting, Kate handles automatic indentation, making it easier to
avoid whitespace inconsistencies that could cause Python to fail. Python scripts are .py files. For example, hello.py might be our first script. To use Kate, try typing kate hello.py to create a simple script.
Formatting Python files
Formatting is important in Python. The Python interpreter uses whitespace indentation to determine which pieces of code are grouped together in a special waydfor example, as part of a function, loop, or class. How much space is used is not typically important, as long as it is consistent. If two spaces are used to indent the first time, two spaces should be used to indent subsequently.
Running Python files
Let’s get comfortable with Python by writing a quick and simple script. Copy the following code into a text editor and save it as hello.py:
#!/usr/bin/python
user = “<your name>”
print “Hello ” + user + “!”
Line one defines this as a Python script. This line is typically not required for scripts written in Windows, but for cross-compatibility it is acceptable to include it regardless of platform. It gives the path to the Python executable that will run our program. In line two, we assign our name to a variable called user. Next, we print
the result, joining the text together using the concatenation operator, a plus sign, to join our variable to the rest of the text. Let’s try it! We can run our script by typing python hello.py in a shell window. Linux or UNIX environments offer a second way to run Python scripts: We can make the script executable by typing chmod u+x hello.py and then ./hello.py. So now, using Back-Track, let’s make it happen! See Figure for an example of expected output from Back-Track.
If you Write that code Congratulations! You have just written your first Python script. Chances are good
that this will be the only time you write a Python script to say hello to yourself, so let’s move on to more useful concepts.
Variables
Python offers a few noteworthy types of variables: strings, integers, floating-point numbers, lists, and Dictionaries.
#!/usr/bin/python
myString = “This is a string!” # This is a string variable
myInteger = 5 # This is an integer value
myFloat = 5.5 #This is a floating-point value
myList = [ 1, 2, 3, 4, 5] #This is a list of integers
myDict = { ‘name’ : ‘Python User’, ‘value’ : 75 } #This is a dictionary
with keys representing # Name and Value
Everything after the # on a line is not interpreted by Python, but is instead considered to be a comment from the author about how a reader would interpret the information. Comments are never required, but they sure make it easier to figure out what the heck we did last night. We can create multi-line comments using three double quotes before and after the comment. Let’s look at an example.
#!/usr/bin/python
“””
This is a Python comment. We can make them multiple lines
And not have to deal with spacing
This makes it easier to make readable comment headers
“””
print “And our code still works!”
In Python, each variable type is treated like a class. If a string is assigned to a variable, the variable will contain the string in the String class and the methods and features of a String class will apply to it. To see the differences, we are going to try out some string functions in Python interactive-mode by just typing python at the command prompt. Follow along with Figure 2 by entering information after the >>> marks.
We started by creating a string called myString. Then we used the bracket operators to get the first four characters. We used [:4] to indicate that we want four characters from the beginning of the string. This is the same as using [0:4]. Next, we used the replace function to change the “o” character to the “0” character. Note
that this does not change the original string, but instead outputs a new string with the changes. Finally, we used the split method with a space delimiter to create a list out of our string. We will use this again later in the chapter when parsing input from the network.
TIP
To find out more string functions to test on your own, you can visit the Python reference manual for strings at.
Modules
Python allows for grouping of classes and code through modules. When we use a module, we will “import” it. By importing it, we gain access to the classes, class methods, and functions inside the module. Let’s explore modules more through our interactive Python session in Figure 3. Python makes finding an MD5 hash of text (say, a password, for example) very easy. Notice that Python has no idea what we are trying to do until we
import the module. But, once we do, we get the hash of our original value in hexadecimal.
TIP
The hashlib module has more hash types that can be calculated. The full list of algorithms
and methods is available at.
Arguments
So far the scripts we have created have been static in nature.We can allow arguments to be passed on the command line to make scripts that are reusable for different tasks. Two ways to do this are with ARGV and optparse. The ARGV structure is a list containing the name of the program and all the arguments that were passed to the application on the command line. This uses the sys module. The other option is the opt-parse module. This gives more options for argument handling. We’ll explore each in more detail shortly. While conducting a penetration test, there is always a chance that something we are doing may adversely affect a server. We want to make sure the service we are testing stays up while we are conducting our test. Let’s create a script using the sys module and the httplib module to do Web requests. Follow along by creating the
following file as webCheck.py and make it executable with chmod u+x webCheck.py.
#!/usr/bin/python
import httplib, sys
if len(sys.argv) < 3:
sys.exit(“Usage ” + sys.argv[0] + ” <hostname> <port>n”)
host = sys.argv[1]
port = sys.argv[2]
client = httplib.HTTPConnection(host,port)
client.request(“GET”,”/”)
resp = client.getresponse()
client.close()
if resp.status == 200:
print host + ” : OK”
sys.exit()
print host + ” : DOWN! (” + resp.status + ” , ” + resp.reason + “)”
This script shows how to import modules inside a script. It is possible to import multiple modules by separating them with a comma. Then we do some basic error checking to determine that our argument list from ARGV is at least three elements long. The name of the script you are running is always in sys.argv[0]. In this script, our
other arguments are our host and the port we want to connect. If those arguments are absent, we want to throw an error and exit the script. Python lets us do this in one line. The return code for sys.exit is assumed to be 0 (no error) unless something else is specified. In this case, we are asking it to display an error, and Python will assume it should return a code of 1 (error encountered) since we have done this.We can use any number in this function if we want to make custom error codes.
Once we have assigned our remaining list items into appropriate variables, we need to connect to our server, and request our URL. The method client. getresponse() retrieves an object which contains our response code, the reason for the code, and other methods to retrieve the body of the Web page we requested. We want to make sure the page returned a 200 message, which indicates that everything executed successfully. If we did receive a 200 message, we print that the site is okay and exit our script successfully. If we did not receive a 200 code, we want to print that the site is down and say why. Note that we did not tell sys.exit() a number here. It should assume 0 for OK. However, it’s a good practice not to assume and make a habit of always putting in a number. The resp.status will have our return code in it, and the resp.reason will explain why the return code was what it was. This will allow us to know the site is down.
Information
To be continue….. In next Article We discuss more..
You guys get part 1 Here
link:- | https://thehacktoday.com/learn-python-part-2/ | CC-MAIN-2021-04 | refinedweb | 1,614 | 75.1 |
Crash Dump Analysis
Not all bugs can be found prior to release, which means not all bugs that throw exceptions can be found before release. Fortunately, Microsoft has included in the Platform SDK a function to help developers collect information on exceptions that are discovered by users. The MiniDumpWriteDump function writes the necessary crash dump information to a file without saving the whole process space. This crash dump information file is called a minidump. This technical article provides info about how to write and use a minidump.
- Writing a Minidump
- Thread safety
- Writing a Minidump with Code
- Using Dumpchk.exe
- Analyzing a Minidump
- Summary
Writing a Minidump
The basic options for writing a minidump are as follows:
Do nothing. Windows automatically generates a minidump whenever a program throws an unhandled exception. Automatic generation of a minidump is available on Windows XP, Windows Vista, and Windows 7. If the user allows it, the minidump will be sent to Microsoft, and not to the developer, through Windows Error Reporting (WER). Developers can contact the Microsoft Windows Gaming & Graphics developer relations group to arrange for access to the WER service.
Use of WER requires:
- Developers to sign their applications using Authenticode
- Developers to obtain a VeriSign ID to access the WinQual service
- Applications have valid VERSIONINFO resource in every executable and DLL
If you implement a custom routine for unhandled exceptions, you are strongly urged to use the ReportFault function in the exception handler to also send an automated minidump to WER. The ReportFault function handles all of the issues of connecting to and sending the minidump to WER. Not sending minidumps to WER violates the requirements of Games for Windows.
For more information on how WER works, see How Windows Error Reporting Works. For an explanation of registration details, see Introducing Windows Error Reporting on MSDN's ISV Zone.
- Use a product from the Microsoft Visual Studio Team System. On the Debug menu, click Save Dump As to save a copy of a dump. Use of a locally saved dump is only an option for in-house testing and debugging.
- Add code to your project. Add the MiniDumpWriteDump function and the appropriate exception handling code to save and send a minidump directly to the developer. This article demonstrates how to implement this option. However, note that MiniDumpWriteDump does not currently work with managed code and is only available on Windows XP, Windows Vista, Windows 7.
Thread safety
MiniDumpWriteDump is part of the DBGHELP library. This library is not thread-safe, so any program that uses MiniDumpWriteDump should synchronize all threads before attempting to call MiniDumpWriteDump.
Writing a Minidump with Code
The actual implementation is straightforward. The following is a simple example of how to use MiniDumpWriteDump.
#include <dbghelp.h> #include <shellapi.h> #include <shlobj.h> int GenerateDump(EXCEPTION_POINTERS* pExceptionPointers) { BOOL bMiniDumpSuccessful; WCHAR szPath[MAX_PATH]; WCHAR szFileName[MAX_PATH]; WCHAR* szAppName = L"AppName"; WCHAR* szVersion = L, L"%s%s", szPath, szAppName ); CreateDirectory( szFileName, NULL ); StringCchPrintf( szFileName, MAX_PATH, L"())) { } }
This example demonstrates the basic usage of MiniDumpWriteDump and the minimum information necessary to call it. The name of the dump file is up to the developer; however, to avoid file name collisions, it is advisable to generate the file name from the application's name and version number, the process and thread IDs, and the date and time. This will also help to keep the minidumps grouped by application and version. It is up to the developer to decide how much information is used to differentiate minidump file names.
It should be noted that the path name in the preceding example was generated by calling the GetTempPath function to retrieve the path of the directory designated for temporary files. Use of this directory works even with least-privileged user accounts, and it also prevents the minidump from taking up hard drive space after it is no longer needed.
If you archive the product during your daily build process, also be sure to include symbols for the build so that you can debug an old version of the product, if necessary. You also need to take steps to maintain full compiler optimizations while generating symbols. This can be done by opening your project's properties in the development environment and, for the release configuration, doing the following:
- On the left side of the project's property page, click C/C++. By default, this displays General settings. On the right side of the project's property page, set Debug Information Format to Program Database (/Zi).
- On the left side of the property page, expand Linker, and then click Debugging. On the right side of the property page, set Generate Debug Info to Yes (/DEBUG).
- Click Optimization, and set References to Eliminate Unreferenced Data (/OPT:REF).
- Set Enable COMDAT Folding to Remove Redundant COMDATs (/OPT:ICF).
MSDN has more detailed information on the MINIDUMP_EXCEPTION_INFORMATION structure and the MiniDumpWriteDump function.
Using Dumpchk.exe
Dumpchk.exe is a command-line utility that can be used to verify that a dump file was created correctly. If Dumpchk.exe generates an error, then the dump file is corrupt and cannot be analyzed. For information on using Dumpchk.exe, see How to Use Dumpchk.exe to Check a Memory Dump File.
Dumpchk.exe is included on the Windows XP product CD and can be installed to System Drive\Program Files\Support Tools\ by running Setup.exe in the Support\Tools\ folder on the Windows XP product CD. You can also get the latest version of Dumpchk.exe by download and installing the debugging tools available from Windows Debugging Tools on Windows Hardware Developer Central.
Analyzing a Minidump
Opening a minidump for analysis is as easy as creating one.
To analyze a minidump
- Open Visual Studio.
- On the File menu, click Open Project.
- Set Files of type to Dump Files, navigate to the dump file, select it, and click Open.
- Run the debugger.
The debugger will create a simulated process. The simulated process will be halted at the instruction that caused the crash.
Using the Microsoft Public Symbol Server
To get the stack for driver- or system-level crashes, it might be necessary to configure Visual Studio to point to the Microsoft public symbol server.
To set a path to the Microsoft symbol server
- On the Debug menu, click Options.
- In the Options dialog box, open the Debugging node, and click Symbols.
- Make sure Search the above locations only when symbols are loaded manually is not selected, unless you want to load symbols manually when you debug.
- If you are using symbols on a remote symbol server, you can improve performance by specifying a local directory that symbols can be copied to. To do this, enter a path for Cache symbols from symbol server to this directory. To connect to the Microsoft public symbol server, you need to enable this setting. Note that if you are debugging a program on a remote computer, the cache directory refers to a directory on the remote computer.
- Click OK.
- Because you are using the Microsoft public symbol server, an End User License Agreement dialog box appears. Click Yes to accept the agreement and download symbols to your local cache.
Debugging a Minidump with WinDbg
You can also use WinDbg, a debugger that is part of the Windows Debugging Tools, to debug a minidump. WinDbg allows you to debug without having to use Visual Studio. To download Windows Debugging Tools, see Windows Debugging Tools on Windows Hardware Developer Central.
After installing Windows Debugging Tools, you must enter the symbol path in WinDbg.
To enter a symbol path in WinDbg
- On the File menu, click Symbol Path.
- In the Symbol Search Path window, enter the following:
"srv*c:\cache*;"
Using Copy-Protection Tools with Minidumps
Developers also need to be aware of how their copy-protection scheme might affect the minidump. Most copy-protection schemes have their own descramble tools, and it is up to the developer to learn how to use those tools with MiniDumpWriteDump.
Summary
The MiniDumpWriteDump function can be an extremely useful tool in collecting and solving bugs after the product has been released. Writing a custom exception handler that uses MiniDumpWriteDump allows the developer to customize the information collection and improve the debugging process. The function is flexible enough to be used in any C++-based project and should be considered part of any project's stability process.
Build date: 11/16/2013 | http://msdn.microsoft.com/pt-pt/library/ee416349.aspx | CC-MAIN-2014-35 | refinedweb | 1,398 | 54.73 |
Hi, I'm Valerio, software engineer, founder & CTO at Inspector.
I decided to write this post after responding to a support request from a developer who asked me how he can monitor his Laravel application by services and not by hostnames.
The company is working on a backend API for a mobile app. The APIs are running on a set of AWS EC2 instances managed by an Autoscaling Group.
After a thorough investigation into the reason for this request, here is the solution we found.
What is an autoscaling group?
An Autoscaling Group contains a collection of VM instances that are treated as a logical grouping for the purposes of automatic scaling and management of specific component of your system.
In Laravel is quite easy to separate your application in logical components that runs in different servers, like APIs, background Jobs workers, web app, scheduled tasks (cron), etc, so they can scale in and out indipendently based on their specific workload.
Every established cloud provider allwos you to configure Autoscaling Groups, or you can use other technologies like Kubernetes. The result is the same:
Due to the constant turnover of servers to handle the application load dynamically you could see a bit of mess in your monitoring charts. A lot of trendlines turn on and off continuously, one for each underlying server.
Grouping monitoring data by service name
It may be more clear to use a human friendly service name to represent all your servers inside the same Autoscaling Group.
Since transactions in Inspector are grouped by the hostname, we can use the Inspector library to “artificially” change it just before the transaction is sent out of your application to make your dashboard more clear and understandable.
In the boot method of the AppServiceProvider you can use beforeFlush() to add a callback that simply change the hostname of the transaction, assigning a general service name (rest-api, workers, web-app, etc) instead of the original hostname of the server.
1<?php23namespace App\Providers;45use App\Jobs\ExampleJob;6use Illuminate\Support\ServiceProvider;7use Inspector\Laravel\Facades\Inspector;89class AppServiceProvider extends ServiceProvider10{11 /**12 * Register any application services.13 *14 * @return void15 */16 public function register()17 {18 //19 }2021 /**22 * Bootstrap any application services.23 *24 * @return void25 */26 public function boot()27 {28 Inspector::beforeFlush(function ($inspector) {29 $inspector->currentTransaction()30 ->host31 // You can get the desired service_name by config, env, etc.32 ->hostname = config('app.service_name')33 });34 }35}
You can control what service name should be used in each part of your system through the deployment process by using environment variables.
The app.service_name configuration property could be setup as below:
1<?php23// config/app.php file45return [67 'service_name' => env('INSPECTOR_SERVICE_NAME', 'rest_api'),89];
Along the deployment pipeline you can inject a different "INSPECTOR_SERVICE_NAME" for each autoscaling group.
Your dashboard will chage from this:
To this:
I think it looks more clear, and it is what our customer think too 😃!
The number of servers is no longer visible, but you could gather this information from your cloud console.
Use filters to eliminate noise
Thanks to the transactions filtering it's even more easy to identify the most heaviest tasks in each Autoscaling Group, so you can optimize their performance and cut the costs due to the high number of tasks perfomed before the Autoscaling Group need to start a new server.
Conclusion
Thanks to its purely software library Inspector could be the right choice for developers that love to stay focused on coding instead of infrastructure management.
You will never have the need to install things at the server level or make complex configuration in your cloud infrastructure to monitor your application in real-time.
Inspector works with a lightweight software library that you can install in your application like any other dependencies. Try the Laravel package, it's free.
Visit our website for more details
Filed in: | https://laravel-news.com/how-to-monitor-your-laravel-application-by-services-not-by-hostnames | CC-MAIN-2022-21 | refinedweb | 649 | 50.06 |
Introduction
I’m going to talk a little about the editing features of the DataGrid. I will dive deep into a couple of things, but overall I just want to give you a good idea of the APIs you can work with and how to tweak a few details. So basically, I will introduce a bit of the internal implementation that may be beneficial for you to understand, the editing commands and how to customize them, and the editing events that are available.
Background on the data source and DataGrid working together
Some major updates were done in 3.5 SP1 to enable this editing scenario for the DataGrid. In particular, the IEditableCollectionView interface was added and ListCollectionView and BindingListCollectionView were updated to support this interface. Read more on IEditableCollectionView in its MSDN documentation. As a refresher, ListCollectionView is the view created for an ItemsControl when your data source implements IList, such as ObservableCollection<T>. BindingListCollectionView is the view created for an ItemsControl when your data source implements IBindingList, such as an ADO.NET DataView.
The DataGrid uses IEditableCollectionView underneath the covers to support transactional editing as well as adding and removing data items. In the implementations of IEditableCollectionView in ListCollectionView and BindingListCollectionView, they both follow the IEditableObject pattern, where the calls to IEditableCollectionView.EditItem, IEditableCollectionView.CancelEdit, and IEditableCollectionView.CommitEdit end up calling IEditableObject.BeginEdit, IEditableObject.CancelEdit, and IEditableObject.EndEdit respectively. It is in the implementation of IEditableObject where you provide the functionality to commit or roll back changes to the data source. For an ADO.NET DataTable, you get this functionality for free, as DataRowView already implements IEditableObject. When using an ObservableCollection<T> you will have to provide your own implementation of IEditableObject for T. See the MSDN documentation of IEditableObject for an example of how to implement it for a data item.
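As a rough illustration, here is a minimal snapshot-based sketch of IEditableObject on a hypothetical Customer item (INotifyPropertyChanged is omitted for brevity, but a real DataGrid item would normally implement it as well):

// Requires System.ComponentModel. A minimal sketch; the snapshot/restore
// strategy is one simple way to support rollback.
public class Customer : IEditableObject
{
    private Customer _backup;   // snapshot taken when an edit transaction begins
    private bool _inEdit;

    public string Name { get; set; }
    public string Phone { get; set; }

    public void BeginEdit()
    {
        if (_inEdit) return;    // BeginEdit can be called more than once per transaction
        _inEdit = true;
        _backup = (Customer)MemberwiseClone();
    }

    public void CancelEdit()
    {
        if (!_inEdit) return;
        _inEdit = false;
        Name = _backup.Name;    // roll the properties back to the snapshot
        Phone = _backup.Phone;
    }

    public void EndEdit()
    {
        if (!_inEdit) return;   // nothing pending
        _inEdit = false;
        _backup = null;         // keep the edited values; persistence could happen here
    }
}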
Things to keep in mind:
· DataGrid checks the IEditableCollectionView’s CanAddNew, CanCancelEdit, and CanRemove properties before executing the corresponding AddNew, CancelEdit, and Remove methods. So if editing appears not to work for some reason, be sure to check that the view reports it is able to perform the operation.
For information on how the data binding is hooked to the UI, see this post on Stock and Template Columns and Dissecting the Visual Layout.
Note about DataGrid properties related to editing
There are three properties on DataGrid to control editing/adding/deleting. These properties are:
· CanUserAddRows
· CanUserDeleteRows
· IsReadOnly (not in CTP)
They are basically self-documenting, but beware of CanUserAddRows and CanUserDeleteRows as they can appear a little magical. Their values are coerced based on other properties such as DataGrid.IsReadOnly, DataGrid.IsEnabled, IEditableCollectionView.CanAddNew, and IEditableCollectionView.CanRemove. So this is another thing to watch out for when editing. If you run into a situation where you set CanUserAddRows or CanUserDeleteRows to true but it is changed to false automatically, check that each of those coercion inputs has a value that permits the operation.
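One quick way to see which condition is getting in the way is to query the view directly. DataGrid.Items implements IEditableCollectionView, so a diagnostic sketch like this (myDataGrid is a placeholder name) can help:

// Requires System.ComponentModel and System.Diagnostics.
IEditableCollectionView view = myDataGrid.Items;
Debug.WriteLine("CanAddNew:     " + view.CanAddNew);     // false if, for example, the item type has no default constructor
Debug.WriteLine("CanRemove:     " + view.CanRemove);
Debug.WriteLine("CanCancelEdit: " + view.CanCancelEdit); // only true while editing an item that implements IEditableObject
Debug.WriteLine("IsReadOnly:    " + myDataGrid.IsReadOnly);
Debug.WriteLine("IsEnabled:     " + myDataGrid.IsEnabled);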
Working with editing commands
Default commands have been added to the DataGrid to support editing. These commands and their default input bindings are:
· BeginEditCommand (F2)
· CancelEditCommand (Esc)
· CommitEditCommand (Enter)
· DeleteCommand (Delete)
When each command is executed it will do some internal housekeeping and at some point it will call into its IEditableCollectionView counterpart. For example, BeginEditCommand calls into IEditableCollectionView.EditItem and CancelEditCommand calls into IEditableCollectionView.CancelEdit.
DataGrid also has APIs where you can call editing commands programmatically. Not surprisingly, the APIs are BeginEdit, CancelEdit, and CommitEdit.
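For example, a small sketch of driving an edit from code-behind (the row/unit overload shown is one possible usage; myDataGrid is a placeholder name):

myDataGrid.BeginEdit();                                // put the current cell into edit mode
// ... change the value programmatically or let the user type ...
myDataGrid.CommitEdit(DataGridEditingUnit.Row, true);  // commit the whole row and exit edit mode
// or, to throw the pending changes away instead:
myDataGrid.CancelEdit();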
Adding new input gestures
Adding new input gestures is similar to any other control in WPF. The DataGrid commands are added through the CommandManager so one possible solution would be to register a new InputBinding with the CommandManager:
CommandManager.RegisterClassInputBinding(
    typeof(DataGrid),
    new InputBinding(DataGrid.BeginEditCommand, new KeyGesture(Key.<new key>)));
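If the gesture should only apply to one DataGrid instance rather than the whole class, a per-instance KeyBinding is an alternative (F3 is just an arbitrary example key, and myDataGrid is a placeholder name):

myDataGrid.InputBindings.Add(
    new KeyBinding(DataGrid.BeginEditCommand, new KeyGesture(Key.F3)));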
Disabling commands
You can disable any of the editing commands by attaching to the CommandManager.CanExecuteEvent, looking for the command to disable, then setting e.CanExecute accordingly. Here is an example:
_handler = new CanExecuteRoutedEventHandler(OnCanExecuteRoutedEventHandler);
EventManager.RegisterClassHandler(typeof(DataGrid), CommandManager.CanExecuteEvent, _handler);

void OnCanExecuteRoutedEventHandler(object sender, CanExecuteRoutedEventArgs e)
{
    RoutedCommand routedCommand = (e.Command as RoutedCommand);
    if (routedCommand != null)
    {
        // Look for the editing command to disable by name.
        if (routedCommand.Name == "<command name>")
        {
            e.CanExecute = <some condition>;
            if (!e.CanExecute)
            {
                // Mark the event handled so the default class handler cannot re-enable it.
                e.Handled = true;
            }
        }
    }
}
This is a relatively cumbersome way of disabling an editing command from being executed. Fortunately events were added to the DataGrid so that they can be canceled in a more direct fashion (although no direct event exists for the DeleteCommand).
Editing events on the DataGrid
These are the editing events that you can listen to and cancel the operation or modify data:
· RowEditEnding
· CellEditEnding
· BeginningEdit
· PreparingCellForEdit
· InitializingNewItem
In the first three events, RowEditEnding, CellEditEnding, and BeginningEdit, you have access to the DataGridRow and DataGridColumn that is being committed, cancelled, or edited. These events are raised right before the actual operation occurs, and you have the ability to cancel the operation completely by setting e.Cancel to true. RowEditEnding and CellEditEnding both have an EditAction parameter which lets you know whether it is a commit or cancel action. From CellEditEnding, you also have access to the editing FrameworkElement, which gives you the ability to set or get properties on the visual itself before a cell commit or cancel.
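As a concrete illustration, here is a sketch of a CellEditEnding handler that rejects an empty value in a hypothetical "Name" column (the column header and the TextBox editor are assumptions based on a stock text column):

// Hooked up with: myDataGrid.CellEditEnding += OnCellEditEnding;
void OnCellEditEnding(object sender, DataGridCellEditEndingEventArgs e)
{
    if (e.EditAction == DataGridEditAction.Commit &&
        (e.Column.Header as string) == "Name")
    {
        TextBox editor = e.EditingElement as TextBox;  // stock text columns edit with a TextBox
        if (editor != null && string.IsNullOrEmpty(editor.Text))
        {
            e.Cancel = true;                           // veto the commit; the cell stays in edit mode
        }
    }
}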
PreparingCellForEdit is fired right after the cell has changed from a non-editing state to an editing state. In this event you have the ability to modify the contents of the cell. InitializingNewItem is called when a new item is added and in this event you have the option to set any properties on the newly created item. This event is good when you want to set initial default values on a new item.
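For example, a sketch that seeds defaults on the hypothetical Customer item from earlier (the default values are arbitrary):

// Hooked up with: myDataGrid.InitializingNewItem += OnInitializingNewItem;
void OnInitializingNewItem(object sender, InitializingNewItemEventArgs e)
{
    Customer customer = e.NewItem as Customer;
    if (customer != null)
    {
        customer.Name = "<new customer>";
        customer.Phone = "555-0100";
    }
}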
Summary
So hopefully this will give you an idea on what APIs you have available for editing scenarios as well as some gotchas on how particular editing features work. If there is other editing questions/topics that you would like to read about please let me know. Another item that I plan to discuss in the future is how row validation will be tied into the DataGrid, so stay tuned!
PingBack from
As you might have heard, .NET Framework 3.5 SP1 and Visual Studio 2008 SP1 are out today! There are a
There have been several questions on the WPF CodePlex discussion list relating to styling rows and columns
If you derive from ObservableCollection you should make the derived type generic:
public class SomeCollection<T> : ObservableCollection<T>
otherwise IEditableCollectionView.CanAddNew will return false if the collection is empty.
orsino,
That is correct.
Can you post a source code and demo application for implementing editting feature in wpf datagrid using Entity Framework?
Thanks,
Xan
Is there a way to get the CanAddNew property to return true when ObservableCollection contains an interface object? Some magic factory attribute pointer or something?
Regards
Lee
Lee,
Unfortunately there isn’t a direct way to get an Interface object to create itself with IECV.
Hi There,
I use latest WPF Toolkit DataGrid. But setting up CanUserAddRows to true doesnt work. CanUserDeleteRows and IsReadOnly works though (to whatver value is set for them). My grid hosts a text column, a combobox column and a template column.
I have checked that IsReadonly is False and IsEnabled is True.
What am I missing here?
Thx in Advance,
Vinit.
Vinit,
Is IECV.CanAddNew true? Can you describe your data source a bit more.
I try to use the Datagrid with an ObservableCollection. I try to persist my data in RowEndEdit event but if I get persist errors a cannot handle them. I also cannot see any CommittingEdit
event as described above. How can catch persist errors correctly?
Thx Anschütz
Here again,
my actual Problem is: When I try to persist data in RowEditEnding and Errors occure I want the user to be able to react with a Dialog. A MessageBox or Window whatever. But any call like MessageBox.Show() or Window win = new Window(); win.Show() causes the DataGrid to call the RowEditEnding() Method again.
Where can I put such I Dialog call?
Thx for answer.
Anschutz,
Maybe you can make a special case that when an error occurs you cancel the commit in RowEditEnding. Then when the user fixes it in the dialog you programmatically commit the data. Also, CommittingEdit has been replaced by RowEditEnding. I need to update two of the APIs in this post.
Hi Vincent,
We are playing with the DataGrid and we run into a wall. I have noticed our problem has already been covered in the previous comments but unfortunately, the solution presented gives us big headaches elsewhere.
In order to provide extended functionality we derived a class from ObservableCollection. This was also a MUST for us because we are using XamlReader/XamlWriter for serialization and they don’t work with generic classes.
So in order to resolve IECV.CanAddNew returning false for empty collection (why??????), we must make the derived class generic, but making it generic totally screws up XamlWriter.
So we obiviously need to resort to trickery to get the best of both worlds. And some trick I have already tried are very UGLY!
Is there an easy way to make ListCollectionView behave correctly? Is this considered to be a bug in LCV? I have tried a naive approach by inheriting from LCV and forcing true on CanAddRow, but this failed miserably.
wpf.wanna.be,
I don’t know about all scenarios but some have been fixed in dev10. As far as a workaround, there aren’t any easy and straightforward ones. Maybe if you could provide some code we can take a look.
Vincent, thanks for a really quick response. As for the code, consider the following.
class MyCollection<T> : ObservableCollection<T> {…}
This works correctly in DataGrid with respect to ListCollectionView.CanAddRow. But serialization support is close to none for XamlReader/XamlWriter when it comes to generic classes. I have been trying all day to get a MarkupExtension to work but I have had limited success in reading a generic class from XAML. Writing still does not work.
On the other hand, the following non-generic class:
class MyCollection : ObservableCollection<T> {…}
…works perfectly with both XamlReader/XamlWriter but there is a MAJOR problem in that ListCollectionView.CanAddRow (wrongly) returns FALSE if a collection is empty.
As soon as I programmatically add at least one item to this collection, CanAddRow starts returning TRUE.
So, there you go. If you have any ideas, I would be grateful.
Vincent-
Sorry if this is a dumb question, but I am having a little trouble committing changes to my db with the DataGrid.
I have a simple WPF application with a LINQ to Entities data model pulling from my database.
In the DataGrid I have explicitly set the columns’ binding modes to TwoWay, and its ItemSource = myEntities.Customers.
From here I can edit on the grid, add new items, etc. and all the changes are showing in the grid, but none of the changes are shoveled back to the db.
What am I missing here to make the magic happen? It was a little clearer to me using a ListView, but the grid is definitely the tool I want to use for CRUD and less programming.
twhite @ fire . ca . gov
thank you,
-Trey
Trey,
Are you doing the save changes part with your entities, myEntities.SaveChanges?
wpf.wanna.be,
The scenario that you are describing is fixed in dev10. Unfortunately, there are no magic workarounds for it that haven’t already been described.
Hi ,
Im having requirement like this,
Im having n row and user will have save and reset button out side of datagrid.when i click on reset button ,is there any possiblity to reset all rows in datagrid which r modified.
Mahender,
Currently you can only reset one row at a time. You will have to create this custom editing functionality yourself.
wpf.wanna.be, Vinsibal,
I found that if you access the CanAddRow of ListCollectionView once before you use the collection, magically the CanUserAddRows of the DataGrid becomes true. Strange!
IEditableCollectionView ecv = new ListCollectionView(myRecordCache);
bool b = ecv.CanAddNew; // dummy access
MyGrid.DataContext = ecv;
1. DataGridColumn.SortDirection does not actually sort the column. DataGridColumn.SortDirection is used
Vincent,
In my datagrid, I’ve got an event wired up on the CellEditEnding event that executes some code to lookup if the value they entered is valid or not (comparing against Zip codes).
If the data they enter is invalid, I show a messagebox.
The problem I’m having is that the messagebox continually fires if it’s invalid (I think the focus is switching between the cell and the messagebox) and gets caught in a bad loop. Any ideas with this?
Thanks,
Chris
Chris,
You could try keeping track of the state like this:
<toolkit:DataGrid Grid.Row="1"
x:Name="MyDataGrid"
ItemsSource="{Binding Items}" AutoGenerateColumns="False"
Validation.
<toolkit:DataGrid.Columns>
<toolkit:DataGridTextColumn </toolkit:DataGrid.Columns>
</toolkit:DataGrid>
private bool _isInErrorState = false;
private void OnDataGridValidationError(object sender, ValidationErrorEventArgs e)
{
if (e.Action == ValidationErrorEventAction.Added && !_isInErrorState)
{
_isInErrorState = true;
MessageBox.Show(e.Error.ErrorContent.ToString());
_isInErrorState = false;
}
}
Chris,
MyDataGrid_Error and OnDataGridValidationError are the same thing. Forgot to update it when I was copy/pasting.
Thanks for the link to this post from the IEditableCollectionView one.
Sadly, this page doesn’t cover IECV.AddNew which was the topic of my question.
I did notice that I hadn’t implemented CanAddNew, but implementing it didn’t change my problem, i.e. that I can only add a new item on the initial empty line, as a new empty line doesn’t appear after committing that one.
I have also noticed that CommitNew doesn’t get called after committing a new item in that single new empty line, CommitEdit is called instead.
Currently, I implement the following:
public class DerivedView : ListCollectionView, IEditableCollectionView
{
public DerivedView(System.Collections.IList list)
: base(list)
{
}
private bool isAddingNew = false;
private Derived newDerived = null;
Object IEditableCollectionView.AddNew()
{
newDerived = new Derived();
// Initialize some non-user-modifiable properties here
InternalList.Add(newDerived);
isAddingNew = true;
return newDerived;
}
bool IEditableCollectionView.CanAddNew
{
get { return (!isAddingNew); }
}
bool IEditableCollectionView.IsAddingNew
{
get { return isAddingNew; }
}
void IEditableCollectionView.CommitNew()
{
if (!isAddingNew) return;
isAddingNew = false;
base.CommitNew();
}
}
Vincent-
I have a WPF application using the DataGrid from the toolkit. I have written it using an MVVM pattern architecture. Behind it all sits a DAL that persists changes to an Oracle DB.
In particular I have a window that allows users to edit a form we process for business. It has two parts – a OTM relationship between the header material and totals (the parent record), and specific incident material (child records).
This window is bound to a ViewModel of the parent record, who’s child records are exposed as an ObservableCollection<ChildRecordViewModels>.
The DataGrid is then bound to this observable collection of children.
When a user updates the record, persistence is happening for the entire form, and works flawless – it iterates through the collection and does well.
However, I am stuck trying to figure out how to ADD and DELETE records from this grid bound to an OC within a view model.
Any suggestions?
Trey,
This particular article has some good info on inserts and deletes,. Let me know if that helps.
Vincent-
Thanks for the comment back. I’ll check that out very soon. I got side-tracked (wisely for once) by a more immediate issue with my editing process.
Same context as above, here’s the simplest way to describe it:
1. Both ParentViewModel and ChildViewModel implement EditableViewModel (my base class with IEO, IDEI, and INPC).
2. The view we are looking at is a modal window bound to the ParentViewModel. I manually call BeginEdit() on it as the parent record should be considered in an edit state for the life-time of the modal window.
3. BeginEdit() in my abstract class calls a SaveProperties() method, that creates a dictionary of all the property values called _savedState. So that Cancel may restore it, and End may nullify it.
4. The top portion of the view is text box controls… simple value type properties of the parent view model (strings, integers, etc.). The bottom half is a Toolkit DataGrid bound to a property that is an ObservableCollection<ChildViewModels>.
5. Lastly there are two buttons OK and CANCEL. OK calls end it, marks the records clean, and closes the modal. CANCEL calls cancel edit and restores the _savedState dictionary values for the parent view model.
You can probably see what might be happening now… here’s the problem.
A. When a user edits a row in the grid… the same process happens. the grid inherent calls Begin, we change it, the value changes and if we do not cancel the change then… the child view model is changed and marked dirty.
B. Now when a user clicks the CANCEL button, it says "hey, either your parent or your child records are dirty, do you want to persist to db?" – and if i say no, CancelEdit is called… as it should be… restoring the parent record values, including the OC that the grid is bound to…
C. The issue is that the OC is a reference type and therefore its values within the _savedState dictionary are getting updated each time a row is edited rather than kept pristine…
Any thoughts or suggestions on how to make implement a Cancel button that will restore changes to the grid as well as the changes to its simple controls?
Sorry for the long winded explanation:
-trey
Vincent-
More on the last post… I am trying simply to ensure that the "edit state" of both the parent and child view models is for the life of the window.
My current implementation allows for the parent to function so… but the inherent functionality of the DataGrid calling End/CancelEdit() methods on a lost focus means their initial state is lost then.
Is there a way to prevent a DataGrid from calling EndEdit() or CancelEdit() ?
If so, I would simply modify my Ok and Cancel button commands to not only handle the parent’s state, but loop through the OC and an either End or Cancel each of them.
-T
Trey,
There is a way to prevent the DataGrid from calling CancelEdit(). Listen to DataGrid.RowEditEnding and you can cancel it there based on criteria you are looking for. Hope that helps.
Vincent:
I have a datagrid bound to an ObservableCollection of class X, and has CanUserAddRows set to True. Is it possible to modify a property of class X when the user adds a new row (before it is displayed to the user)?
Thank you.
This tripped me up for hours today, but the when using a collection of T, T must have a parameterless constructor or you won’t get the blank row.
Problem: after I removed a row from the ObservableCollection, my datagrid became uneditable. All settings above were correct.
Answer found: Before removing row, validation was performed and I got an error (which I didn’t see since the row is removed). Since row was removed, I had no indication that I had a validation error. | https://blogs.msdn.microsoft.com/vinsibal/2008/10/01/overview-of-the-editing-features-in-the-wpf-datagrid/ | CC-MAIN-2017-13 | refinedweb | 3,159 | 57.06 |
Unformatted text preview: oat number line has the property that the gaps widen as values
grow in magnitude. At the extreme, FLT_MAX and its nearest neighbor have a gap of 2 -23
scaled by 2 127 , which is enormous ~1031! The gap size doubles each time the exponent
goes up by one, so for value as small as 2.0, its nearest neighbor is 2 -22 away and this pair
will fail to be ApproximatelyEqual.
ApproximatelyEqual b) The compiler warns of implicit declarations for the functions a ssert , fopen, and
CVectorAlloc , and errors because N ULL is an unknown symbol. The linker will error
because it finds no function definitions for assert and CVectorAlloc.
Add the missing #include statements below and link with cvector.c or libcevmap.
#include
#include
#include
#include <assert.h>
<stdio.h>
<stdlib.h>
"cvector.h" //
//
//
// see
see
see
see macro definition for assert
prototype for fopen
#define for NULL
prototype for CVectorAlloc...
View Full Document
- Winter '08
- Cain,G
- CPU cache, const void *a, const void *b, void *ra
Click to edit the document details | https://www.coursehero.com/file/8794739/At-the-extreme-FLTMAX-and-its-nearest-neighbor-have-a-gap/ | CC-MAIN-2018-05 | refinedweb | 179 | 64.1 |
On 12/10/2009 10:51 AM, Diego Elio “Flameeyes” Pettenò wrote: > Hello, > > In a recent post on my blog [1] I ranted on about libvirt and in > particular I complained that the configuration files look like what I > call “almost XMLâ€�. The reasons why I say that are multiple, let me try > to explain some. > > In the configuration files, at least those created by virt-manager there > is no specification of what the file should be (no document type, no > namespace, and, IMHO, a too generic root element name); given that some > kind of distinction is needed for software like Emacs's nxml-mode to > know how to deal with the file, I think that's pretty bad for > interaction between different applications. While libvirt knows > perfectly well what it's dealing with, other packages might not. Might > not sound a major issue but it starts tickling my senses when this > happens. > > The configuration seem somewhat contrived in places like the disk > configuration: if the disk is file-backed it require the file attribute > to the <source> element, while it needs the dev attribute if it's a > block device; given that it's a path in both cases it would have sounded > easier on the user if a single path attribute was used. But this is > opinable. > This is something that has always bugged me as well, and is indeed a pain to deal with in virt-manager when doing things like changing cdrom media. I think we should just loosen the input restrictions, so that any path passed via the <source> properties file, dev, or dir are used, but will be dumped with the 'correct' property to maintain back compat. That said, I think the XML format is pretty straight forward. The above caveat you mention is the only real annoying thing. The one other stumbling block is that not all devices have a unique property to key off of in the XML (ex. you can have two identical <video> devices). This makes life difficult for virt-manager when removing a device, but it hasn't been a big issue in practice. > The third problem I called out for in the block is a lack of a schema > for the files; Daniel corrected me pointing out that the schemas are > distributed with the sources and installed. Sure thing, I was wrong. On > the other hand I maintain that there are problems with those schemas. > The first is that both the version distributed with 0.7.4 and the git > version as of today suffer from bug #546254 [2] (secret.rng being not > well formed) so it means nobody has even tested them as of lately; then > there is the fact that they are never referenced by the human-readable > documentation [3] which is why I didn't find it the first time around; > add also to that some contrived syntax in those schema as well that > causes trang to produce a non-valid rnc file out of them (nxml-mode uses > rnc rather than rng). > The bug you mention only exists because we don't have secret XML regression tests. Other schemas are in better shape: I'm pretty confident that virtual network and storage pool/volume schemas have near complete coverage. Domain probably has very high coverage but there are no doubt nuances of the format that aren't covered by our regression suite, and therefor may have incorrect schemas. For a long while we didn't use the XML schemas for anything so they were horrendously out of date and practically useless. That has changed within the past year but we are still playing catchup to have the schema match libvirt code reality. Putting a link to schemas on the website is also a good idea. 
> But I guess the one big problem with the schemas is that they don't seem > to properly encode what the human-readable documentation says, or what > virt-manager does. For instance (please follow me with selector-like > syntax), virt-manager creates /domain/os/type[ machine='pc-0.11'] in the > created XML; the same attribute seem to be documented: “There are also > two optional attributes, arch specifying the CPU architecture to > virtualization, and machine referring to the machine typeâ€�. The schema > does not seem to accept that attribute though (“element type: Relax-NG > validity error : Invalid attribute machine for element typeâ€� with > xmllint, just to make sure that it's not a bug in any other piece of > software, this is Daniel's libxml2). > If there are any other pieces of the schema that you find are incorrect or don't match reality, please enumerate them here and I'll take some time to make sure they are tested in our regression suite. Thanks, Cole | https://www.redhat.com/archives/libvir-list/2009-December/msg00219.html | CC-MAIN-2014-15 | refinedweb | 800 | 63.63 |
0
Hello !
I'm trying to do a similar thing as.
I managed to have Matlab launch, and run my Matlab file as the GUI of my matlab function is shown on screen. However it only stays on screen for 1 second, and Matlab closes !
import subprocess as sp command = """/Applications/MATLAB_R2012b.app/bin/matlab -r "cd(fullfile('/Users/Jules/Dropbox/CODES/Matlab/')), coverGUI_AL_FOV" """ sh=sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, shell=True) print sh.communicate()
By the way, I looked at but I'm not getting what all the stdin/out/err are, so I just put these in my code with no real reason..
If anyone has an idea why the process seems to quit, that would help me a lot !
Thanks | https://www.daniweb.com/programming/software-development/threads/459726/calling-matlab-from-python-subprocess | CC-MAIN-2018-34 | refinedweb | 130 | 66.64 |
iminuit is a Jupyter-friendly Python interface for the _Minuit2_ C++ library maintained by CERN's ROOT team.
iminuit is a Jupyter-friendly Python interface for the Minuit2 C++ library maintained by CERN's ROOT team.
It can be used as a general robust function minimisation method, but is most commonly used for likelihood fits of models to data, and to get model parameter error estimates from likelihood profile analysis.
from iminuit import Minuit def fcn(x, y, z): return (x - 2) ** 2 + (y - 3) ** 2 + (z - 4) ** 2 fcn.errordef = Minuit.LEAST_SQUARES m = Minuit(fcn, x=0, y=0, z=0) m.migrad() ## run optimiser print(m.values) ## x: 2, y: 3, z: 4 m.hesse() ## run covariance estimator print(m.errors) ## x: 1, y: 1, z: 1
The current 2.x series has introduced breaking interfaces changes with respect to the 1.x series.
All interface changes are documented in the changelog with recommendations how to upgrade. To keep existing scripts running, pin your major iminuit version to <2, i.e.
pip install 'iminuit<2' installs the 1.x series.
optimization jupyter notebooks minuit2 cplusplus
Rodrigo Senra - Jupyter Notebooks - Olá pessoal e sejam bem-vindos à mais um episódio do Castálio Podcast! …
The Jupyter Notebook is also an incredibly powerful tool for interactively developing and presenting data science projects. This tutorial will walk you through how to set up Jupyter Notebooks on your local machine and how to start using it to do data science projects. As a web application in which you can create and share documents that contain live code, equations, visualizations as well as text, the Jupyter Notebook is one of the ideal tools to help you to gain the data science skills you need.So, don't forget to join us at 7:00 PM IST for the Great Learning experience!
In this Python tutorial, we’re going to explore Jupyter Notebooks and discuss their benefits and how to get started. What is Jupyter Notebook? Why use Jupyter Notebook? Get started with Jupyter. Create a new notebook. Add content to your notebook. Share your notebook
Why switch to JupyterLab from jupyter-notebook? Jupyter Notebook is a web-based interactive computational environment for creating Jupyter notebook documents.
In this blog, I want to share how you can turn Jupyter Notebooks into pdf format in a few lines! Instead of sharing your Jupyter Notebooks, it would be neater if you could convert the notebooks and submit the pdf version | https://morioh.com/p/8e98fdadafca | CC-MAIN-2021-31 | refinedweb | 413 | 57.57 |
Sometimes it is necessary to read a sequence of images from a standard video file, such as .avi and .mov files.
In a scientific context, it is usually better to avoid these formats in favor of a simple directory of images or a multi-dimensional TIF. Video formats are more difficult to read piecemeal, typically do not support random frame access or research-minded meta data, and use lossy compression if not carefully configured. But video files are in widespread use, and they are easy to share, so it is convenient to be equipped to read and write them when necessary.
Tools for reading video files vary in their ease of installation and use, their disk and memory usage, and their cross-platform compatibility. This is a practical guide.
For a one-off solution, the simplest, surest route is to convert the video to a collection of sequentially-numbered image files, often called an image sequence. Then the images files can be read into an ImageCollection by skimage.io.imread_collection. Converting the video to frames can be done easily in ImageJ, a cross-platform, GUI-based program from the bio-imaging community, or FFmpeg, a powerful command-line utility for manipulating video files.
In FFmpeg, the following command generates an image file from each frame in a video. The files are numbered with five digits, padded on the left with zeros.
ffmpeg -i "video.mov" -f image2 "video-frame%05d.png"
More information is available in an FFmpeg tutorial on image sequences.
Generating an image sequence has disadvantages: they can be large and unwieldy, and generating them can take some time. It is generally preferable to work directly with the original video file. For a more direct solution, we need to execute FFmpeg or LibAV from Python to read frames from the video. FFmpeg and LibAV are two large open-source projects that decode video from the sprawling variety of formats used in the wild. There are several ways to use them from Python. Each, unfortunately, has some disadvantages.
PyAV uses FFmpeg’s (or LibAV’s) libraries to read image data directly from the video file. It invokes them using Cython bindings, so it is very fast.
import av v = av.open('path/to/video.mov')
PyAV’s API reflects the way frames are stored in a video file.
for packet in container.demux(): for frame in packet.decode(): if frame.type == 'video': img = frame.to_image() # PIL/Pillow image arr = np.asarray(img) # numpy array # Do something!
The Video class in PIMS invokes PyAV and adds additional functionality to solve a common problem in scientific applications, accessing a video by frame number. Video file formats are designed to be searched in an approximate way, by time, and they do not support an efficient means of seeking a specific frame number. PIMS adds this missing functionality by decoding (but not reading) the entire video at and producing an internal table of contents that supports indexing by frame.
import pims v = pims.Video('path/to/video.mov') v[-1] # a 2D numpy array representing the last frame
Moviepy invokes FFmpeg through a subprocess, pipes the decoded video from FFmpeg into RAM, and reads it out. This approach is straightforward, but it can be brittle, and it’s not workable for large videos that exceed available RAM. It works on all platforms if FFmpeg is installed.
Since it does not link to FFmpeg’s underlying libraries, it is easier to install but about half as fast.
from moviepy.editor import VideoFileClip myclip = VideoFileClip("some_video.avi")
Imageio takes the same approach as MoviePy. It supports a wide range of other image file formats as well.
import imageio filename = '/tmp/file.mp4' vid = imageio.get_reader(filename, 'ffmpeg') for num, image in vid.iter_data(): print(image.mean()) metadata = vid.get_meta_data()
Finally, another solution is the VideoReader class in OpenCV, which has bindings to FFmpeg. If you need OpenCV for other reasons, then this may be the best approach. | https://scikit-image.org/docs/dev/user_guide/video.html | CC-MAIN-2019-18 | refinedweb | 662 | 58.28 |
Closed Bug 1474888 Opened Last year Closed Last year
Fix Prompt
.java's Alert Dialog
Categories
(Firefox for Android :: General, defect)
Tracking
()
Firefox 63
People
(Reporter: petru, Assigned: petru)
References
Details
Attachments
(1 file)
Fennec is set to handle the configuration changes itself. As such, the AlertDialogs is kept across screen rotations. At least in the case of the Dialog from Prompt.java [1] this is problematic because the old layout is kept which is not suitable for the new configuration. As reported in bug 1460072 comment #0 and easily seen here - This bug exists also in Chrome [2] and Focus [3]. - The easiest solution would be to just dismiss the dialog in the case of a screen rotation and let the user start it again. This would be the most detrimental to the UX I think. - Next easiest solution would be to restart that same dialog in the case of a screen rotation. This would automatically create a new dialog which means the user would have lost his input until then. - The best solution for the user would be to save the state of the currently displayed View and restore it after the re-creation of the same View. But because the AlertDialog in question uses views from different platform widgets with different state and no easy way to save and return their state to us this would be an intricate process, prone to bugs. [1] [2] [3]
NI-in Bram to ask for his opinion from a UX perspective.
Flags: needinfo?(bram)
Assignee: nobody → petru.lingurar
Status: NEW → ASSIGNED
Hi Petru, the best solution is to use a dialogue that was designed with a responsive layout, so that it can flexibly reposition based on the user’s current screen rotation without changing or refreshing its state. However, it sounds like this is impossible to do? Or at least, this is highly dependent on whether the host that provides the dialogue has designed a responsive layout for it – and most hosts haven’t. The next best solution is to save the dialogue state, recreate the view, then restore the state. But you mentioned that this is intricate. With all those constraints, I would recommend going with option 2: recreating the view without saving the state. This is bound to cause annoyance, but we hope that most users would have picked a selection and hit “Okay” before rotating the screen.
Flags: needinfo?(bram)
Was looking into saving state and restoring it after a screen rotation but there are a few possible inputs for which I can't find how to test to make sure all will continue working. According to the list of possible inputs [1] I need help with finding how to test the following inputs: - menulist - label - icongrid - tabs [1]
A tip - you can restrict the DXR search by using a filter like "path:mobile*js", which makes finding relevant results easier. But in any case what I found was: - menulist seems to be a dropdown list and is used by the WebRTC permissions dialogue [1], try - label is used for viewing certificate details [2]: Try and then click on "View" - icongrid is used by the helper app prompts [3] you get when a file download can be handled by other apps as well, or when the Android icon in the URL bar appears for more than one app that can handle the current page - context menus internally seem to use prompts as well and use tabs [4], try long-pressing an image link to see that in action [1] [2] [3] [4]
Thanks Jan! Managed to test each PromptInput and I think the above patch should handle the screen rotation as best as possible. A minor glitch could be seen now and then but being able to keep the user input should make up for that. There are a few PromptInput concretizations that do not need to save their current input, as a choice will be made and sent when the user clicks a certain option: TabInput, IconGridInput and LabelInput. The rest of the PromptInputs, upon screen rotation, will save their current input and restore it in the newly created widget for the new AlertDialog.
Comment on attachment 8994227 [details] Bug 1474888 - Fix Prompt.java's AlertDialog; ::: mobile/android/base/java/org/mozilla/gecko/prompts/ColorPickerInput.java:27 (Diff revision 1) > String init = obj.getString("value"); > - mInitialColor = Color.rgb(Integer.parseInt(init.substring(1, 3), 16), > + mInitialColor = getColorCode(init); Combine the two lines ::: mobile/android/base/java/org/mozilla/gecko/prompts/PromptService.java:18 (Diff revision 1) > -import android.util.Log; > > public class PromptService implements BundleEventListener { > private static final String LOGTAG = "GeckoPromptService"; > > - private final Context mContext; > + private final Context context; We prefer `m` prefixes but not big deal
Attachment #8994227 - Flags: review?(nchen) → review+
Pushed by archaeopteryx@coole-files.de: Fix Prompt.java's AlertDialog; r=jchen
Status: ASSIGNED → RESOLVED
Closed: Last year
status-firefox63: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 63
Flags: qe-verify+
Verified as fixed on Nightly 63. The date/time picker looks better on rotation, also verified the other input types are ok. Huawei Nexus 6P (Android 8.1.0) Google Pixel (Android P Beta) HTC Desire 820 (Android 6.0.1) Samsung Galaxy Tab 3 (Android 8.0)
status-firefox61: --- → affected
status-firefox62: --- → affected
Flags: qe-verify+
Is this something we should consider for Beta backport?
Flags: needinfo?(petru.lingurar)
Comment on attachment 8994227 [details] Bug 1474888 - Fix Prompt.java's AlertDialog; Approval Request Comment [Feature/Bug causing the regression]: The original layout for input pickers is maintained across rotations. [User impact if declined]: Layout may not fit well on the new configuration after screen rotation. 994227 - Flags: approval-mozilla-beta?
Comment on attachment 8994227 [details] Bug 1474888 - Fix Prompt.java's AlertDialog; One of several small fixes for the date picker related to the switch in 62 to using the platform library. Verified in nightly. Let's uplift for next Monday's beta 17 build.
Attachment #8994227 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Flags: qe-verify+
Can you please check this, Levente?
Flags: needinfo?(levente.sacal)
Verified as fixed in 62 Beta 17. Tested the date/time picker on on landscape as well. Sony Xperia Z5 (Android 7.0) Samsung Galaxy S8 (Android 8.0.0)
Flags: needinfo?(levente.sacal)
Status: RESOLVED → VERIFIED
Flags: qe-verify+ | https://bugzilla.mozilla.org/show_bug.cgi?id=1474888 | CC-MAIN-2019-43 | refinedweb | 1,056 | 54.83 |
Arduino boards have changed the face of DIY technology. Simple projects like creating miniature Arduino traffic lights are perfect for teaching beginners about basic electronics and programming.
Arduinos are perfect for home projects, and can be used on the move by attaching a battery pack to them. The problem is, even the chunkiest battery will be run down quickly by even a small Arduino board.
If you truly want your Arduino to run for the long haul, you are going to need to make some tweaks and changes.
1. Arduino Low-Power Software Libraries
There are several software libraries available that can change your Arduino’s power consumption. By sending the Arduino into a deep sleep for a set amount of time, power can be saved between operations. This is particularly useful for microcontrollers taking sensor readings in remote areas such as weather stations, or sensing sub-circuits for larger devices.
The Low-Power library by Github user rocketscream is an example of an easy to use library which is perfect for saving some power. Consider the following code, based on some of the library’s example code:
#include "LowPower.h" // setup() your sensors/LEDs here void loop() { // This next line powers the arduino down for 8 seconds //ADC means analogue to digital conversion, and BOD for brown out detection //both are turned off during the sleep period to save power LowPower.powerDown(SLEEP_8S, ADC_OFF, BOD_OFF); //After each sleep, you can instruct the Arduino to carry out its tasks here - for example, take a temperature reading and send it to a server. }
This code is a good start. Not only does it drop the power consumption using already built in methods, it turns off the potentially costly analogue to digital conversion (which can use power even when idle) and the brown out detection which stops the Arduino running code when its input voltage gets too low.
This is a simple but effective measure to start cutting down how much power your Arduino pulls. We can go much deeper than this though!
2. Arduino Built-In Power Saving
The Arduino programming language has its own built-in sleep functionality designed to help with power saving. The sleep function, used in tandem with interrupt clauses, allow the Arduino to wake up again.
Arduino has specific pins designed for interrupting the sleep cycle, and you can access them using the setup function:
#define interruptPin 2 void setup() { //interrupt pin MUST be Arduino pin 2 or 3 on Uno //set the pin to pull up mode pinMode(interruptPin, INPUT_PULLUP); }
Now that this is set up as an interrupt pin, you can safely go about sending your Arduino to sleep. A simplified way of doing this is to create two small functions:
void sendToSleep() { //enable sleeping - note this primes sleep, not starts it! sleep_enable(); //attach the interrupt, specify the pin, the method to call on interrupt, //and the interrupt conditions, in this case when the pin is pulled low. attachInterrupt(interruptPin, wakeUpAgain, LOW); //actually activate sleep mode sleep_cpu(); //code continues on from here after interrupt Serial.println("Just awoke."); } void wakeUpAgain() { //stop sleep mode sleep_disable(); //clear the interrupt detachInterrupt(interrputPin); }
The code above is a simplified way to send your Arduino into sleep mode, and you can wake it up again by connecting pin 2 to the GND pin. While the Arduino Uno is in sleep mode it shaves around 11mA off the total power draw, and if you use a Pro Mini instead you can expect to drop from 25mA regular power usage to just 0.57mA.
Interrupts are a great way to bring your power consumption down, and The Kurks blog has some detailed posts about them, which help to demystify interrupts for beginners.
3. Slow Down the Arduino Clock Speed
The clock speed of your Arduino determines how many operations it can perform per second. Most Arduino boards run on a 8 or 16 MHz processor, though some of the offshoot boards such as the Teensy 3.6 boast processing speeds of up to 180MHz! This is why many DIY hackers like to use Teensy boards 7 Cool Projects You Can Build With a Teensy 7 Cool Projects You Can Build With a Teensy Teensy is a great alternative to Arduino, and these awesome projects show just what you can do with a Teensy! Read More over Arduino in their DIY projects.
All of this processing power comes at a power cost, and for many use cases employing the full clock speed is overkill. This is where regulating the clock speed through software can make a difference.
It would be remiss of me not to warn you, changing the clock speed can cause bootloader issues and may leave you with an Arduino you cannot upload sketches to, if done incorrectly.
If you do want to try changing your clock speed, along with making tools in the Arduino IDE to allow you to change CPU frequency on the fly, Pieter P’s detailed guide can help you get started.
4. Replace Power-Hungry Arduino Components
The Arduino Uno is the most popular board for beginners, and most Arduino kits supply either an official or clone model. Its larger form factor and hot swappable microchips make it perfect for experimentation, and its broad capacity for input voltages along with onboard voltage conversion for 3.3v components make it fit for almost every purpose.
All of this functionality doesn’t come cheap in terms of power usage. With this in mind, there are a number of things you can do to physically alter an Arduino Uno to save power.
The voltage regulator on the Arduino Uno causes the largest single power drain on the board. This isn’t particularly surprising, as it has to drop up to 7v safely from the input power supply to the board itself. Some have tried to get around this by replacing the regulator with more efficient ones, but this doesn’t really solve the issue.
Patrick Fenner of DefProc engineering came up with a great solution in his blog post covering Uno power saving strategies. By replacing the voltage regulator entirely with a DC-DC buck converter, he managed to half the power consumption of the microcontroller.
5. Make Your Own Arduino
A sure-fire way to only use the power needed for your project is to design a microcontroller to your own specifications. In the past we’ve shown how you can build your own Arduino for a fraction of the cost of an official board.
As well as having much more control over the size and scope of your circuit, this can bring power consumption down to 15.15mA in standby, and as little as 0.36mA in sleep mode. These figures are taken from the incredibly detailed post by Nick Gammon on his forum.
This post covers many other aspects of Arduino power saving, and is a fantastic resource to refer to when trying to squeeze a little more time out of a mobile power supply.
Use Arduino for Big Ideas and a Small Power Footprint
When you are working on your first beginner Arduino projects, power consumption probably isn’t too much of a concern.
As your ideas get bigger and require more thought, it is well worth learning ways to streamline your set up. Between making sure you get the right Arduino board and setting it up to get the most out of it, you can go a long way to making truly unique and useful devices. Good luck and keep tinkering!
Explore more about: Arduino, Battery Life, Hardware Tips. | https://www.makeuseof.com/tag/arduino-power-saving-tips/ | CC-MAIN-2018-30 | refinedweb | 1,266 | 57.5 |
I have a question about importing TIP problems. The file
tip2015/list_return_2.smt2 looks like this:
; List monad laws (declare-datatypes (a) ((list (nil) (cons (head a) (tail (list a)))))) (define-fun (par (a) (return ((x a)) (list a) (cons x (_ nil a))))) (define-fun-rec (par (a) (++ ((x (list a)) (y (list a))) (list a) (match x (case nil y) (case (cons z xs) (cons z (++ xs y))))))) (define-fun-rec (par (a b) (>>= ((x (list a)) (y (=> a (list b)))) (list b) (match x (case nil (_ nil b)) (case (cons z xs) (++ (@ y z) (>>= xs y))))))) (prove (par (a) (forall ((xs (list a))) (= (>>= xs (lambda ((x a)) (return x))) xs))))
and after
fixupAndLoad becomes:
∀x0 ∀x1 head(cons(x0, x1)) = x0, ∀x0 ∀x1 tail(cons(x0, x1)) = x1, ∀x return(x) = cons(x, nil), lam2 = lam, ∀y ++(nil, y) = y, ∀y ∀z ∀xs ++(cons(z, xs), y) = cons(z, ++(xs, y)), ∀y >>=(nil, y) = nil, ∀y ∀z ∀xs >>=(cons(z, xs), y) = ++(apply1(y, z), >>=(xs, y)), ∀y0 ∀y1 nil != cons(y0, y1), ∀x apply1(lam, x) = return(x) ⊢ ∀xs >>=(xs, lam2) = xs
apply1must be introduced by the
tiptool, but it does not show up in
TipProblem.functionswhich I am using to generate reduction rules for testing.
Here's an update:
mode analytic_independent 43 analytic_sequential 75 spind 34 treegrammar 16
All the problems solved by the analytic independent mode are affected by the tip importer thing mentioned above which makes my testing of conjectures less efficient or straight up breaks it. Three of them are solved if I just disable testing.
Spind is currently a little slower in total on the problems they have in common:
spind 118,666ms analytic_independent 93,527ms
object Example extends scala.App { implicit var ctx = Context() ctx += hoc"P{?a}: ?a > o" ctx += Ti ctx += hoc"c:i" val t1 = le"P(x:?a)" val t2 = le"P(c)" val Some( subst ) = syntacticMGU( t1, t2 ) require( subst( t1 ) == subst( t2 ) ) } | https://gitter.im/gapt/gapt?at=5cc043b08446a6023e6a837d | CC-MAIN-2021-10 | refinedweb | 329 | 61.29 |
Hello people,
If you need to handle HTTP packets in Scapy, check out my scapy-http library.
To install it, run
sudo pip install scapy-http
After you import it in your program like so
from scapy.layers import http
You’ll be able to have a pretty...Read more »
I faced quite a few times the problem of copying files to virtual machines started via Vagrant. Since many people were facing the same issue, I’ve written a quick Vagrant plugin to solve this problem.
Install it via:
Read more »Read more »
vagrant plugin install vagrant...
In my research lab at UCSB, we routinely run large-scale experiments that, essentially, need lots of computation and memory. These are pretty cool to run, but setting them up can be quite cumbersome. To make this task easier on us, we have been running...Read more »
Today, let’s challenge ourselves with Codility’s Iota 2011 challenge. The details of the challenge are in the link (I can’t copy them here because of copyright :) ).
Essentially, we are playing something very similar to the board game Dominoes. We...Read more »
Today’s challenge: given an array, find all combinations of three numbers that sum up to X. Target complexity: O(N2)
Input: array =
[2, 3, 1, -2, -1, 0, 2, -3, 0], X =
0
Expected output:
[(-3, 0, 3)] * 3 + [(-3, 1, 2)] * 2 + [(-2, -1, 3... | http://www.lucainvernizzi.net/blog/page/2/ | CC-MAIN-2017-30 | refinedweb | 235 | 72.36 |
Tech Tips index
May 21, 1998
This issue presents tips, techniques, and sample code for the following topics:
Temporary Files
In programming applications you often need to use temporary files --
files that are created during program execution to hold transient information.
A typical case is a language compiler that uses several passes (such as
preprocessing or assembly) with temporary files used to hold the output
of the previous pass. In some cases, you could use memory instead of disk
files, but you can't always assume that the required amount of memory will
be available.
One feature in JDK 1.2 is the ability to
create temporary files. These files are created in a specified directory
or in the default system temporary directory (such as C:\TEMP on Windows
systems). The temporary name is something like the following:
t:\tmp\tmp-21885.tmp
The same name is not returned twice during the lifetime of the Java1 virtual
machine. The returned temporary file is in a File object and can be used
like any other file. Note: With Unix, you may find that your input file
has to also reside in the same file system where the temporary files are
stored. The renameTo method cannot rename files across file systems.
Here is an example of using temporary files to convert an input file to
upper case:
import java.io.*;
public class upper {
public static void main(String args[])
{
// check command-line argument
if (args.length != 1) {
System.err.println("usage: upper file");
System.exit(1);
}
String in_file = args[0];
try {
// create temporary and mark "delete on exit"
File tmpf = File.createTempFile("tmp");
tmpf.deleteOnExit();
System.err.println("temp file = " + tmpf);
// copy to temporary file,
// converting to upper case
File inf = new File(in_file);
FileReader fr = new FileReader(in_file);
BufferedReader br = new BufferedReader(fr);
FileWriter fw =
new FileWriter(tmpf.getPath());
BufferedWriter bw =
new BufferedWriter(fw);
String s = null;
while ((s = br.readLine()) != null) {
s = s.toUpperCase();
bw.write(s, 0, s.length());
bw.newLine();
}
br.close();
bw.close();
// rename temporary file back to original file
if (!inf.delete() || !tmpf.renameTo(inf))
System.err.println("rename failed");
}
catch (IOException e) {
System.err.println(e);
}
}
}
The input file is copied to the temporary file, and the file contents are
converted to upper case. The temporary file is then renamed back to the
input file.
JDK 1.2 also provides a mechanism whereby files can be marked for "delete
on exit." That is, when the Java virtual machine exits, the file is
deleted. An aspect worth noting in the above program is that this feature
handles the case where the temporary file is created, and then an error
occurs (for example, the input file does not exist). The delete-on-exit
feature guarantees that the temporary file is deleted in the case of
abnormal program termination.
Resource Bundles
One." A resource bundle contains locale-specific objects, for example
strings representing messages to be displayed in your application. The
idea is to load a specific bundle of resources, based on a particular
locale.
To show how this mechanism works, here's a short example that retrieves and
displays the phrase for "good morning" in two different languages:
# German greeting file (greet_de.properties)
morn=Guten Morgen
# English greeting file (greet_en.properties)
morn=Good morning
The above lines make up two text files, greet_de.properties and
greet_en.properties. These are simple resource bundles.
The following program accesses the resource bundles:
import java.util.*;
public class bundle {
public static String getGreet(String f,
String key, Locale lc)
{
String s = null;
try {
ResourceBundle rb =
ResourceBundle.getBundle(f, lc);
s = rb.getString(key);
}
catch (MissingResourceException e) {
s = null;
}
return s;
}
public static void main(String args[])
{
String fn = "greet";
String mornkey = "morn";
Locale ger = Locale.GERMAN;
Locale eng = Locale.ENGLISH;
System.out.println("German locale = " + ger);
System.out.println("English locale = " + eng);
System.out.println(getGreet(fn, mornkey, ger));
System.out.println(getGreet(fn, mornkey, eng));
}
}
The idea is that ResourceBundle.getBundle looks up a particular bundle,
based on the locale name ("de" or "en"). The bundles
in this example are property files (see java.util.Properties), with
"key=value" pairs in them, and the files are located in the
current directory. A particular bundle is retrieved based on the locale,
and then a specific key is looked up, and the corresponding value returned.
Note that there are a number of additional aspects to resource bundle naming
and lookup that you should acquaint yourself with if you're concerned with
internationalization issues. Resource bundles are commonly used to represent
a collection of message strings, but other types of entities, such as icons,
can also be stored in bundles.
The output of the program is:
German locale = de
English locale = en
Guten Morgen
Good morning
Finally, if you program your application's message display features in
terms of locales and resource bundles, as this example illustrates, then
you have taken an important step toward internationalizing your program.
_______
1 As used on this web site, the terms
"Java virtual machine" or "JVM" mean a virtual machine for the Java platform. | http://java.sun.com/developer/TechTips/1998/tt0521.html | crawl-002 | refinedweb | 848 | 56.55 |
Python zipfile module is used to read and write the zip files. Here we will see how to create a zip file in python using
zipfile module.
How to create Zip file in python:
Usecase: Choose a directory and make the zip file out of it.
import zipfile import os def prepare_zip(dir_path): new_file = dir_path + '.zip' # creating zip file with write mode zip = zipfile.ZipFile(new_file, 'w', zipfile.ZIP_DEFLATED) # Walk through the files in a directory for dir_path, dir_names, files in os.walk(dir_path): f_path = dir_path.replace(dir_path, '') f_path = f_path and f_path + os.sep # Writing each file into the zip for file in files: zip.write(os.path.join(dir_path, file), f_path + file) zip.close() print("File Created successfully..") return new_file if __name__ == '__main__': prepare_zip('C:\\Users\\Lenovo\\Documents\\Data')
ZipFile() creates a new zip file with write permissions by taking the path parameter. Walkthrough each file and folders into the given directory and write them into the zip file.
Output:
File Created successfully..
You can see the created zip in the given directory.
How to unzip/extract Zip file in python:
The same
data.zip file I am going to extract here.
zipfile.extractall() function extracts all members from the archive to the current working directory.
Unzip the file in current working directory:
def extract_zip(dir_path): with zipfile.ZipFile(dir_path+'data.zip', 'r') as zip_inst: zip_inst.extractall()
The above code extracts the content of the data.zip file in current working directory.
Unzip the file in a different location:
If you wish to extract the content in a different location, you have to supply that path to the
extractall(path) function.
def extract_zip(dir_path): with zipfile.ZipFile(dir_path+'data.zip', 'r') as zip_inst: zip_inst.extractall(dir_path)
Then you can see the extracted content in the given path.
References:
Happy Learning 🙂 | https://www.onlinetutorialspoint.com/python/python-how-to-create-zip-file-in-python.html | CC-MAIN-2021-31 | refinedweb | 301 | 53.47 |
This article includes the full source code for the HTML5 ImageMap Editor I created that allows you to create an image map from an existing image that can easily be used with the JQuery plugin ImageMapster. In addition, you can also create a Fabric canvas that functions exactly like an image map but with far more features than any image map. I will be updating the source code from time to time with new web tools and features and you can download the updated source code (FREE) from my personal website filled with FREE Source Code at: Software-rus.com.
I recently had a client who wanted me to create an HTML5 Virtual Home Designer website with images of homes that users could "color in," like in those crayon coloring books where you have an image with outlines of parts of the image and you paint within the outlines. But in this case of painting parts of a home, like the roof or stonefront, you would also want to fill in the outlined areas with patterns, where each pattern can be a different color. The obvious choice initially was to use image maps of houses where the user could select different colors and patterns for each area of the image map of a home, like the roof, gables, siding, etc. And the obvious plugin for handling those image maps was the popular JQuery plugin ImageMapster.
But I still needed a way to create the html <map> coordinates for the image maps of the houses where the syntax would work with the ImageMapster plugin. I didn't like any of the existing image map editors, like Adobe Dreamweaver's Hot Spot Drawing Tools, because they didn't really meet my needs. So I decided to write my own Image Map Editor, which is the editor included in this article.
To create my Image Map Editor I decided to use Fabric.js, a powerful, open-source JavaScript library by Juriy Zaytsev, aka "kangax," with many other contributors over the years. It is licensed under the MIT license. The "all.js" file in the sample project is the actual "Fabric.js" library. Fabric seemed like a logical choice for building an Image Map Editor because you can easily create and populate objects on a canvas, such as simple geometric shapes (rectangles, circles, ellipses, polygons) or more complex shapes consisting of hundreds or thousands of simple paths. You can then scale, move, and rotate these objects with the mouse and modify their properties: color, transparency, z-index, etc. It also includes an SVG-to-canvas parser.
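To give a sense of how this works in practice, here is a minimal Fabric sketch: create a canvas from an existing <canvas id="c"> element and add a rectangle to it. The id and the shape values are placeholders for illustration, not code from the editor.

// Wrap an existing <canvas id="c"></canvas> element in a Fabric canvas
var canvas = new fabric.Canvas('c');

// Fabric objects take their properties as a single options object
var rect = new fabric.Rect({
    left: 50,
    top: 50,
    width: 100,
    height: 60,
    fill: 'rgba(0, 255, 0, 0.5)'
});

// Once added, the rectangle can be moved, scaled, and rotated with
// the mouse; no extra event handling code is required
canvas.add(rect);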
In addition to Fabric I wanted a simple toolbar for my controls so I included the Bootstrap library for my toolbar, buttons, and dropdowns. Some of the libraries I used include:
In HTML and XHTML, an image map is a list of coordinates relating to a specific image, created in order to hyperlink areas of the image to different destinations, in contrast to a normal image link where the entire area of the image links to a single destination.
For working with the ImageMapster plugin we have the following attributes:
mapKey: An attribute identifying each imagemap area. This refers to an attribute on the area tags that will be used to group them logically. Any areas containing the same mapKey value are considered part of a group and are rendered together when any one of them is activated. ImageMapster will work with any attribute you identify as a key. To maintain HTML compliance, I append "data-" to the front of whatever value you assign to mapKey when generating the html for the <map> coordinates, i.e., "data-mapkey", since custom data-* attributes are legal in HTML5 documents. For example, you could set mapKey: 'statename' for an image map of the United States and add an attribute to your areas that provides the full name of each state, e.g. data-statename="Alaska". In the case of image maps of homes you might have data-home1="roof" or data-home1="siding", where home1 is the mapKey and "roof" or "siding" is the mapValue for that area.
mapValue: An area name or id to reference that given area of a map. For example, the following code defines a rectangular area (9,372,66,397) that is part of the "roof" of a house:
<img src="someimage.png" alt="image alternative text" usemap="#mapname" />
<map name="mapname">
<area shape="rect" data-home1="roof" coords="9,372,66,397" href="#" alt="roof" />
</map>
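Once markup like this is on the page, binding ImageMapster to it takes only a few lines. This is a minimal sketch; the selector and the styling options are illustrative and not part of the generated output:

// Group areas by our custom data-home1 attribute so that activating
// any "roof" area highlights every area sharing data-home1="roof"
$('img[usemap="#mapname"]').mapster({
    mapKey: 'data-home1',
    fillColor: 'ff0000',   // ImageMapster hex colors omit the '#'
    fillOpacity: 0.4
});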
The purpose of this editor is to create the html for an image map when the user selects "Show Image Map Html." To do this I decided to use the underscore library that allowed me to easily create the syntax for the html <map></map>. Keep in mind that our goal here is to create the html code that we can copy and paste into our website that will work with the ImageMapster plugin. First I created a template, i.e., "map_template," for the format of the <map></map> html using underscore as follows:
<script type="text/underscoreTemplate" id="map_template">
<map name="mapid" id="mapid">
<% for(var i=0; i<areas.length; i++) { var a=areas[i]; %><area shape="<%= a.shape %>" <%= "data-"+mapKey %>="<%= a.mapValue %>" coords="<%= a.coords %>" href="<%= a.link %>" alt="<%= a.alt %>" />
<% } %></map>
</script>
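To turn this template into the final markup, the editor compiles the template text with underscore and passes in the collected data. This is a sketch of that step; areas and mapKey correspond to the variables built in createObjectsArray below, and #output is a placeholder element:

// Compile the template source into a function, then render it
var template = _.template($('#map_template').html());
var mapHtml = template({ areas: areas, mapKey: mapKey });

// Show the generated <map> markup so it can be copied into a page
$('#output').text(mapHtml);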
I added a number of methods to this editor, but the ONLY method you need to create an image map is "Show Image Map Html".
Let's look at two ways of serializing the fabric canvas. The first is to use underscore and to script a custom data template that you load with the properties of the canvas elements. The second way is to use JSON.stringify(canvas). Let's first look at how we would use underscore. Below is an example of a template to store the properties using underscore.
<script type="text/underscoreTemplate" id="map_data">
[<% for(var i=0; i<areas.length; i++) { var a=areas[i]; %>
{
mapKey: "<%= mapKey %>",
mapValue: "<%= a.mapValue %>",
type: "<%= a.shape %>",
link: "<%= a.link %>",
alt: "<%= a.alt %>",
perPixelTargetFind: <%= a.perPixelTargetFind %>,
selectable: <%= a.selectable %>,
hasControls: <%= a.hasControls %>,
lockMovementX: <%= a.lockMovementX %>,
lockMovementY: <%= a.lockMovementY %>,
lockScaling: <%= a.lockScaling %>,
lockRotation: <%= a.lockRotation %>,
hasRotatingPoint: <%= a.hasRotatingPoint %>,
hasBorders: <%= a.hasBorders %>,
overlayFill: null,
stroke: "#000000",
strokeWidth: 1,
borderColor: "black",
cornerColor: "black",
cornerSize: 12,
transparentCorners: true,
pattern: "<%= a.pattern %>",
<% if ( (a.pattern) != "" ) { %>fill: "#00ff00",<% } else { %>fill: "<%= a.fill %>",<% } %> opacity: <%= a.opacity %>,
top: <%= a.top %>, left: <%= a.left %>, scaleX: <%= a.scaleX %>, scaleY: <%= a.scaleY %>,
<% if ( (a.shape) == "circle" ) { %>radius: <%= a.radius %>,<% } %>
<% if ( (a.shape) == "ellipse" ) { %>width: <%= a.width %>, height: <%= a.height %>,<% } %>
<% if ( (a.shape) == "rect" ) { %>width: <%= a.width %>, height: <%= a.height %>,<% } %>
<% if ( (a.shape) == "polygon" ) { %>points: [<% for(var j=0; j<a.coords.length-1; j = j+2) { %>{x: <%= (a.coords[j] - a.left)/a.scaleX %>, y: <%= (a.coords[j+1] - a.top)/a.scaleY %>}, <% } %>]<% } %>
},<% } %>
]
</script>
In order to load the underscore Template above we create an array using the corresponding values from the fabric elements as shown below. Please keep in mind that I hard-coded some properties to suit my own needs for the website I was building and you can modify this to suit your own needs.
function createObjectsArray(t) {
fabric.Object.NUM_FRACTION_DIGITS = 10;
mapKey = $('#txtMapKey').val();
if ($.isEmptyObject(mapKey)) {
mapKey = "home1";
$('#txtMapKey').val(mapKey);
}
// loop through all objects & assign ONE value to mapKey
var objects = canvas.getObjects();
canvas.forEachObject(function(object){
object.mapKey = mapKey;
});
canvas.renderAll();
canvas.calcOffset()
clearNodes();
var areas = []; //note the "s" on areas!
_.each(objects, function (a) {
var area = {}; //note that there is NO "s" on "area"!
area.mapKey = a.mapKey;
area.link = a.link;
area.alt = a.alt;
area.perPixelTargetFind = a.perPixelTargetFind;
area.selectable = a.selectable;
area.hasControls = a.hasControls;
area.lockMovementX = a.lockMovementX;
area.lockMovementY = a.lockMovementY;
area.lockScaling = a.lockScaling;
area.lockRotation = a.lockRotation;
area.hasRotatingPoint = a.hasRotatingPoint;
area.hasBorders = a.hasBorders;
area.overlayFill = null;
area.stroke = '#000000';
area.strokeWidth = 1;
area.transparentCorners = true;
area.borderColor = "black";
area.cornerColor = "black";
area.cornerSize = 12;
area.transparentCorners = true;
area.mapValue = a.mapValue;
area.pattern = a.pattern;
area.opacity = a.opacity;
area.fill = a.fill;
area.left = a.left;
area.top = a.top;
area.scaleX = a.scaleX;
area.scaleY = a.scaleY;
area.radius = a.radius;
area.width = a.width;
area.height = a.height;
area.rx = a.rx;
area.ry = a.ry;
switch (a.type) {
case "circle":
area.shape = a.type;
area.coords = [a.left, a.top, a.radius * a.scaleX];
break;
case "ellipse":
area.shape = a.type;
var thisWidth = a.width * a.scaleX;
var thisHeight = a.height * a.scaleY;
area.coords = [a.left - (thisWidth / 2), a.top - (thisHeight / 2), a.left + (thisWidth / 2), a.top + (thisHeight / 2)];
break;
case "rect":
area.shape = a.type;
var thisWidth = a.width * a.scaleX;
var thisHeight = a.height * a.scaleY;
area.coords = [a.left - (thisWidth / 2), a.top - (thisHeight / 2), a.left + (thisWidth / 2), a.top + (thisHeight / 2)];
break;
case "polygon":
area.shape = a.type;
var coords = [];
_.each(a.points, function (p) {
newX = (p.x * a.scaleX) + a.left;
newY = (p.y * a.scaleY) + a.top;
coords.push(newX);
coords.push(newY);
});
area.coords = coords;
break;
}
areas.push(area);
});
if(t == "map_template") {
$('#myModalLabel').html('Image Map HTML');
$('#textareaID').html(_.template($('#map_template').html(), { areas: areas }));
$('#myModal').on('shown', function () {
$('#textareaID').focus();
});
$("#myModal").modal({
show: true,
backdrop: true,
keyboard: true
}).css({
"width": function () {
return ($(document).width() * .6) + "px";
},
"margin-left": function () {
return -($(this).width() / 2);
}
});
}
if(t == "map_data") {
$('#myModalLabel').html('Custom JSON Objects Data');
$('#textareaID').html(_.template($('#map_data').html(), { areas: areas }));
$('#myModal').on('shown', function () {
$('#textareaID').focus();
});
$("#myModal").modal({
show: true,
backdrop: true,
keyboard: true
}).css({
"width": function () {
return ($(document).width() * .6) + "px";
},
"margin-left": function () {
return -($(this).width() / 2);
}
});
}
return false;
};
If you want to use JSON.stringify(canvas) then you need to do some extra work. The most important thing to understand about building image maps is that you need accuracy up to 10 decimal places or your image maps will not align properly, especially in the case of polygons. When you use underscore this issue doesn't come because you read the position and point properties accuratly to the required number of decimal places. But JSON.stringify(canvas) rounds of this data to 2 decimal places which results in dramatic misalignment in image maps. I realized this problem early on which is why I initially used the template approach for accuracy. Then in a post, Stefan Kienzle, was kind enough to point out that Fabric has a solution for this issue in that you can set the number of decimal places in a fabric canvas as follows:
fabric.Object.NUM_FRACTION_DIGITS = 10;
This solved one of the problems with using JSON.stringify(canvas). Another issue is that you need to include a few custom properties for image maps and other properties not normally serialized by "stringfy." For example, you will need to add a few extra properties to all fabric object types that we use in image maps and add code that will include the serialization of these custom properties. In fabric to add properties we can either subclass an existing element type or we can extend a generic fabric element's "toObject" method. It would be crazy to subclass just one type of element since our custom properties need to apply to any type of element. Instead, we can just extended a generic fabric element's "toObject" method for the additional properties like: mapKey, link, alt, mapValue, and pattern for our image maps and fabric properties lockMovementX, lockMovementY, lockScaling, and lockRotation as shown below.
canvas.forEachObject(function(object){
// Bill SerGio - We add custom properties we need for image maps here to fabric
// Below we extend a fabric element's toObject method with additional properties
// In addition, JSON doesn't store several of the Fabric properties !!!
object
});
};
})(object.toObject);
...
});
canvas.renderAll();
canvas.calcOffset();
We create the image map by drawing our sections on top of an existing image, a "background" image that we make the background of our canvas. For my own purposes in the editor I do not serialize this background image. In fact, for my own purposes I remove the background image prior to serialization and add it back after serialization so that it is not part of the serialed data. You can change this to suit your own preferences. One of the reasons I did this is because the fullpath of the background image is serialized and unless you are restoring with the same path you will have an issue. I need a relative path for my own purposes. The "background" image is added as follows.
canvas.setBackgroundImage(backgroundImage, canvas.renderAll.bind(canvas));
I wanted to place all of the controls in a single row to allow as much space as possible for editing. I decided to use the bootstrap library and the bootstratp "navbar" for controls for a clean look as shown below.
NavBar Features from left to right:
When I later added zoom I also changed the NavBar controls to stay at the top of the page so that I could still click on an item in the NavBar when I was adding the nodes for a polygon and the page was scrooled down. To accomplish I used Bootstrap’s class ‘navbar-fixed-top’ as follows:
<nav class="navbar navbar-fixed-top">
<div class="navbar-inner">
... etc.
Please keep in mind that this editor is not meant to be a general purpose editor or a drawing program. I created it to do one thing which is to create the html for an image map. The toolbar includes all the basic geometric shapes in a standard image map including circle, ellipse, rectangle, and polygon. I added text just as a demo but text is not part of a standard image map. The reader is free to add other Fabric shapes and options.
We listen for the mousedown event on the Fabric canvas as follows:
var activeFigure;
var activeNodes;
canvas.observe('mouse:down', function (e) {
if (!e.target) {
add(e.e.layerX, e.e.layerY);
} else {
if (_.detect(shapes, function (a) { return _.isEqual(a, e.target) })) {
if (!_.isEqual(activeFigure, e.target)) {
clearNodes();
}
activeFigure = e.target;
if (activeFigure.type == "polygon") {
addNodes();
}
$('#hrefBox').val(activeFigure.link);
$('#titleBox').val(activeFigure.title);
$('#groupsBox').val(activeFigure.groups);
}
}
});
When a user clicks on the circle on the toolbar it sets the activeFigure equal to the figure type of the object to be added such as "circle" or "polygon." Where we position these objects initially on our canvas isn't important because we will be moving and re-shaping them to exactly match the areas of our image map. Then when the user clicks on the canvas, the selected figure type of object is added to the canvas using the following. Please keep in mind that I created this editor to meet my own immediate needs in creating image maps. You can easily customize the features of this editor to meet meet your own needs or preferences.
function add(left, top) {
if (currentColor.length < 2)
{
currentColor = '#fff';
}
if ((window.figureType === undefined) || (window.figureType == "text"))
return false;
var x = (window.pageXOffset !== undefined) ? window.pageXOffset : (document.documentElement || document.body.parentNode || document.body).scrollLeft;
var y = (window.pageYOffset !== undefined) ? window.pageYOffset : (document.documentElement || document.body.parentNode || document.body).scrollTop;
//stroke: String, when 'true', an object is rendered via stroke and this property specifies its color
//strokeWidth: Number, width of a stroke used to render this object
if (figureType.length > 0) {
var obj = {
left: left,
top: top,
fill: ' ' + currentColor,
opacity: 1.0,
fontFamily: 'Impact',
stroke: '#000000',
strokeWidth: 1,
textAlign: 'right'
};
var objText = {
left: left,
top: top,
fontFamily: 'Impact',
strokeStyle: '#c3bfbf',
strokeWidth: 3,
textAlign: 'right'
};
var shape;
switch (figureType) {
case "text":
//var text = document.getElementById("txtAddText").value;
var text = gText;
shape = new fabric.Text ( text , obj);
shape.scaleX = shape.scaleY = canvasScale;
shape.lockUniScaling = true;
shape.hasRotatingPoint = true;
break;
case "square":
obj.width = 50;
obj.height = 50;
shape = new fabric.Rect(obj);
shape.scaleX = shape.scaleY = canvasScale;
shape.lockUniScaling = false;
break;
case "circle":
obj.radius = 50;
shape = new fabric.Circle(obj);
shape.scaleX = shape.scaleY = canvasScale;
shape.lockUniScaling = true;
break;
case "ellipse":
obj.width = 100;
obj.height = 50;
obj.rx = 100;
obj.ry = 50;
shape = new fabric.Ellipse(obj);
shape.scaleX = shape.scaleY = canvasScale;
shape.lockUniScaling = false;
break;
case "polygon":
//$('#btnPolygonClose').show();
$('#closepolygon').show();
obj.selectable = false;
if (!currentPoly) {
shape = new fabric.Polygon([{ x: 0, y: 0}], obj);
shape.scaleX = shape.scaleY = canvasScale;
lastPoints = [{ x: 0, y: 0}];
lastPos = { left: left, top: top };
} else {
obj.left = lastPos.left;
obj.top = lastPos.top;
obj.fill = currentPoly.fill;
// while we are still adding nodes let's make the element
// semi-transparent so we can see the canvas background
// we will reset opacity when we close the nodes
obj.opacity = .4;
currentPoly.points.push({x: left-lastPos.left, y: top-lastPos.top });
shapes = _.without(shapes, currentPoly);
lastPoints.push({ x: left - lastPos.left, y: top-lastPos.top })
shape = repositionPointsPolygon(lastPoints, obj);
canvas.remove(currentPoly);
}
currentPoly = shape;
break;
}
shape.link = $('#hrefBox').val();
shape.alt = $('#txtAltValue').val();
mapKey = $('#txtMapKey').val();
shape.mapValue = $('#txtMapValue').val();
// Bill SerGio - We add custom properties we need for image maps here to fabric
// Below we extend a fabric element's toObject method with additional properties
// In addition, JSON doesn't store several of the Fabric properties !!!
shape
});
};
})(shape.toObject);
shape.mapKey = mapKey;
shape.link = '#';
shape.alt = '';
shape.mapValue = '';
shape.pattern = '';
lockMovementX = false;
lockMovementY = false;
lockScaling = false;
lockRotation = false;
canvas.add(shape);
shapes.push(shape);
if (figureType != "polygon") {
figureType = "";
}
} else {
deselect();
}
}
Many virtual design websites need to apply not just a color to a map area but need to also apply a pattern and color. Below are the two methods I created to apply patterns to fabric elements in my fabric canvas.
// "title" is the mapValue & "img" is the short path for the pattern image
function SetMapSectionPattern(title, img) {
canvas.forEachObject(function(object){
if(object.mapValue == title){
loadPattern(object, img);
}
});
canvas.renderAll();
canvas.calcOffset()
clearNodes();
}
function loadPattern(obj, url) {
obj.pattern = url;
var tempX = obj.scaleX;
var tempY = obj.scaleY;
var zfactor = (100 / obj.scaleX) * canvasScale;
fabric.Image.fromURL(url, function(img) {
img.scaleToWidth(zfactor).set({
originX: 'left',
originY: 'top'
});
// You can apply regualr or custom image filters at this point
//img.filters.push(new fabric.Image.filters.Sepia(),
//new fabric.Image.filters.Brightness({ brightness: 100 }));
//img.applyFilters(canvas.renderAll.bind(canvas));
//img.filters.push(new fabric.Image.filters.Redify(),
//new fabric.Image.filters.Brightness({ brightness: 100 }));
//img.applyFilters(canvas.renderAll.bind(canvas));
var patternSourceCanvas = new fabric.StaticCanvas();
patternSourceCanvas.add(img);
var pattern = new fabric.Pattern({
source: function() {
patternSourceCanvas.setDimensions({
width: img.getWidth(),
height: img.getHeight()
});
return patternSourceCanvas.getElement();
},
repeat: 'repeat'
});
fabric.util.loadImage(url, function(img) {
// you can customize what properties get applied at this point
obj.fill = pattern;
canvas.renderAll();
});
});
}
Since there are in any virtual designer a lot of possible pattern images I added a slider to the drop down for the patterns in the editor. In order to apply a pattern to a section, i.e., "mapValue," of the objects in the canvas you first need to click the refresh symbol on the toolbar that will load the existing mapValues in the canvas into the drop down on the left of the refresh symbol as show below. Then select a mapVlue from the mapValues drop down. Next you can select a pattern from the patterns drop down and it will be applied to all the objects with the mapValue you selected. I created a short video to illustrate this on YouTube.
As soon as I began using my image map editor I quickly realized that I would have to add zoom. My image map had some really tiny areas where I need to create polygons so I added the ability to zoom in on the map as follows.
// Zoom In
function zoomIn() {
// limiting the canvas zoom scale
if (canvasScale < 4.9) {
canvasScale = canvasScale * SCALE_FACTOR;
canvas.setHeight(canvas.getHeight() * SCALE_FACTOR);
canvas.setWidth(canvas.getWidth() * SCALE_FACTOR);
var objects = canvas.getObjects();
for (var i in objects) {
var scaleX = objects[i].scaleX;
var scaleY = objects[i].scaleY;
var left = objects[i].left;
var top = objects[i].top;
var tempScaleX = scaleX * SCALE_FACTOR;
var tempScaleY = scaleY * SCALE_FACTOR;
var tempLeft = left * SCALE_FACTOR;
var tempTop = top * SCALE_FACTOR;
objects[i].scaleX = tempScaleX;
objects[i].scaleY = tempScaleY;
objects[i].left = tempLeft;
objects[i].top = tempTop;
objects[i].setCoords();
}
canvas.renderAll();
canvas.calcOffset();
}
}
I quickly noticed that when I was zoomed in on the canvas and the page was scrolled down and I clicked on a nav button that the window would scroll up to the top and I would have to manually scroll down again to the area I was working on. There are several ways to fix this but I decided to use on the button links in the toolbar the following simple solution that prevents a click on the link from scrolling the browser window up to the navbar.
href="javascript:void(0)"
The next issue I ran into was that of the zoom factor or scaleX and scaleY of the fabric objects created. If all the fabric objects added to the canvas have scaleX = 1.0 and scaleY = 1.0 then work nicely. But if you are zoomed in and add an object then these scale values aren't 1 and things get a bit trixky when saving and restoring the map. I finally figured out that the best thing was to make sure that the whole canvas is zoomed down to it's normal setting of 1:1. Why? Because when we restore a svaed map we are always restoring the saved objects to a canvas scaled at 1:1.
When I started to wrote this image map editor I had only used Fabric to create the editor so I could create an image map for ImageMapster plugin. Then, somewhere during the process of writing this editor I had an epiphany! It dawned on me that using the Fabric canvas as an "image map" was far superior to using a standard image map! In other words, I could take an image and divide it up into sections, i.e., "mapValues", and color those sections, add patterns to those sections or animate those sections to create a kind of super image map. So feel free to use and customize this editor to create standard image maps or to create fabric "image maps" that have a lot more features than the standard image map.
There are two ways to use this editor, namely, to create the <map> html for use with ImageMapster, or to create a fabric canvas that works exactly like an image map but has many more features. One of the things to be aware of if you use ImageMapster is that ImageMapster's "p.addAltImage = function (context, image, mapArea, options) {" function was not really written to work with the idea of using small images to fill large areas by applying a "pattern" to a section of an image map. So, as a heads up, I want to point out that you will need to either modify Imagemapster's "p.addAltImage" or add a new function to ImageMapster's plugin similar to the following in order to accomplish this as follows:
// Add a function like this to the ImageMapster plugin to apply "patterns" to map sections
p.addPatternImage = function (context, image, mapArea, options) {
context.beginPath();
this.renderShape(context, mapArea);
context.closePath();
context.save();
context.clip();
context.globalAlpha = options.altImageOpacity || options.fillOpacity;
//you can replace the line below with one that positions a smaller pattern reactangular exactly over map area to save memory
context.clearRect(0, 0, mapArea.owner.scaleInfo.width, mapArea.owner.scaleInfo.height); // Clear the last image if it exists.
var pattern = context.createPattern(image, 'repeat'); // Get the direction from the button.
context.fillStyle = pattern; // Assign pattern as a fill style.
context.fillRect(0, 0, mapArea.owner.scaleInfo.width, mapArea.owner.scaleInfo.height); // Fill the canvas.
};
I also added a file, i.e., map2json.htm, to the project with a sample image map and the code to convert an existing image map into a fabric canvas with corresponding image map elements for editing. You will have to tweek the code to change the variable names to match your own though.
As I said earlier, I had an epiphany when I realized that I can use a Fabric canvas to replace the old image map but this editor will do the job for either. In addition, as mentioned above, using "javascript:void(0)" instead of "#" prevents scrolling which clicking on the navbar was a really useful tip I found on the web.
I used VisualStudio as my web editor but the editor itself is just an ordinary "html" file, i.e., "ImageMapEditor.htm," that you can just double click on and run in any web browser to use it.
Chrome Frame Plugin. I recommend installing the Chrome Frame Plugin: the IE users behind. Just think the amount of time that a web developer saves without having to code hacks and workarounds for IE.
You can decide for yourself which is better, Imagemapster and a standard image map, OR using a fabric canvas with fabric objects that adds many more cool features. Of course, it depends on your needs and what the clients wants! At least this editor will allow you to create both of these and test them against each other.. | https://www.codeproject.com/script/articles/view.aspx?aid=593037 | CC-MAIN-2017-04 | refinedweb | 4,198 | 59.4 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
#define _declare_box8( \
pool, \ /* Name of the memory pool variable. */
size, \ /* Number of bytes in each block. */
cnt ) \ /* Number of blocks in the memory pool. */
U64 pool[((size+7)/8)*(cnt) + 2]
The _declare_box8 macro declares an array of bytes that can
be used as a memory pool for allocation of fixed blocks with 8-byte
alignment.
The argument pool specifies the name of the memory pool
variable that is used by the memory block allocation routines. The
argument size specifies the size of the blocks, in bytes. The
argument cnt specifies the number of blocks required in the
memory pool.
The _declare_box8 macro is part of RL-RTX. The definition
is in rtl.h.
The _declare_box8 macro does not return any value.
_alloc_box, _calloc_box, _free_box, _init_box8
#include <rtl.h>
/* Reserve a memory for 25 blocks of 30-bytes. */
_declare_box8(mpool,30,25);
void membox_test (void) {
U8 *box;
U8 *cbox;
_init_box8 (mpool, sizeof (mpool), 30);
box = _alloc_box (mpool);
/* This block is initialized to 0. */
cbox = _calloc_box (mpool);
/* 'box' and 'cbox' are always 8-byte aligned. */
..
. | https://www.keil.com/support/man/docs/rlarm/rlarm__declare_box8.htm | CC-MAIN-2020-34 | refinedweb | 189 | 76.72 |
CS::RenderManager::LightingVariablesHelper Class Reference
Helper class to deal with shader variables setup for lighting. More...
#include <csplugincommon/rendermanager/lightsetup.h>
Detailed Description
Helper class to deal with shader variables setup for lighting.
Definition at line 250 of file lightsetup.h.
Constructor & Destructor Documentation
Construct.
Definition at line 279 of file lightsetup.h.
Member Function Documentation
Create a shader variable which is only valid for this frame.
Create a temporary shader variable (using CreateTempSV) and put it onto stack.
Merge a shader variable into a stack as an item of a shader variable.
The variable in the destination stack with the name of sv is an array variable or does not exist gets the array item with the index index set to sv.
- Returns:
- Whether the destination stack was large enough to contain sv.
Merge an array of shader variables into a stack as items of a shader variables.
- See also:
- MergeAsArrayItem
The documentation for this class was generated from the following file:
- csplugincommon/rendermanager/lightsetup.h
Generated for Crystal Space 2.1 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api/classCS_1_1RenderManager_1_1LightingVariablesHelper.html | CC-MAIN-2016-30 | refinedweb | 178 | 51.04 |
Building blog CMS in ReasonML with GraphQL and Serverless using Hasura
Vladimir Novick
・10 min read
This is the first part of blog post series where we will create blog cms using Hasura for GraphQL API, and serverless functions for logic and on the client we will write modern and robust code using ReasonML syntax. Let's get started.
ReasonML intro
First of all, before getting into actual code writing, let's discuss why ReasonML? Even though it's a topic for a stand-alone blog post, I will try to give you a brief overview. ReasonML gives us a fantastic type system powered by Ocaml, but as far as syntax goes, it looks pretty close to Javascript. It was invented by Jordan Walke, the guy who created React and is used in production at Facebook messenger. Recently various companies also adopted Reason and use it in production because of it's really cool paradigm: "If it compiles - it works."
This phrase is a very bold statement, but in fact, because Reason is basically a new syntax of OCaml language, it uses Hindler Miller type system so it can infer types in compile time.
What it means for us as developers?
It means that typically we don't write that many types, if at all, as we write in TypeScript for example and can trust the compiler to infer these types.
Speaking of compilation, Reason can be compiled to OCaml, which in turn can compile to various targets such as binary, ios, android etc, and also we can compile to human-readable JavaScript with the help of Bucklescript compiler. In fact that what we will do in our blog post.
What about npm and all these packages we are used to in JavaScript realm?
In fact, BuckleScript compiler gives us powerful Foreign function interface FFI that lets you use JavaScript packages, global variables, and even raw javascript in your Reason code. The only thing that you need to do is to accurately type them to get the benefits from the type system.
Btw if you want to learn more about ReasonML, I streamed 10h live coding Bootcamp on Youtube, that you can view on my channel
ReasonReact
When using Reason for our frontend development, we will use ReasonReact. There are also some community bindings for VueJs, but mainly, when developing for web we will go with ReasonReact. If you've heard about Reason and ReasonReact in the past, recently ReasonReact got a huge update making it way easier to write, so the syntax of creating Reason components now is not only super slick but looks way better than in JavaScript, which was not the case in the past. Also, with the introduction of hooks, it's way easier to create ReasonReact components and manage your state.
Getting started
In official ReasonReact docs, the advised way to create a new project is to start with
bsb init command, but let's face it. You probably want to know how to move from JavaScript and Typescript. So in our example, we will start by creating our project with create-react-app.
We will start by running the following command:
npx create-react-app reason-hasura-demo
It will create our basic React app in JavaScript, which we will now change into ReasonReact.
Installation
If it's the first time you set up ReasonML in your environment, it will be as simple as installing bs-platform.
yarn global add bs-platform
Also, configure your IDE by installing appropriate editor plugin
I use reason-vscode extension for that. I also strongly advise using
"editor.formatOnSave": true, vscode setting, because Reason has a tool called
refmt which is basically built in Prettier for Reason, so your code will be properly formatted on save.
Adding ReasonML to your project
Now it's time to add ReasonML. We will install
bs-platform and
reason-react dependencies.
yarn add bs-platform --dev --exact yarn add reason-react --exact
And get into the configuration. For that create
bsconfig.json file with the following configuration:
{ "name": "hasura-reason-demo-app", "reason": { "react-jsx": 3 }, "bsc-flags": ["-bs-super-errors"], "sources": [ { "dir": "src", "subdirs": true } ], "package-specs": [ { "module": "es6", "in-source": true } ], "suffix": ".js", "namespace": true, "bs-dependencies": [ "reason-react" ], "ppx-flags": [], "refmt": 3 }
Let's also add compilation and watch scripts to our package.json
"re:build": "bsb -make-world -clean-world", "re:watch": "bsb -make-world -clean-world -w",
If you run these scripts, what will basically happen is all
.re files in your project will be compiled to javascript alongside your
.re files.
Start configuring our root endpoint
Let's write our first reason file, by changing index.js from
import React from 'react'; import ReactDOM from 'react-dom'; import './index.css'; import App from './App';();
to
Basically what I am doing here is render my App component into the dom with
And with
I import register and unregister methods from
serviceWorker.js file so I can use Javascript in Reason.
to run our project, we need to run
npm run re:watch
so our Bucklescript will build files for the first time and watch for changes whenever new files are added.
and in different tab let's just run
npm start and see our React app.
Basic styling
Styling with ReasonML can be either typed due to
bs-css which is based on
emotion or untyped. For simplicity, we will use untyped. Let's delete index.css and App.css we have from 'create-react-app', create
styles.css file and import two packages:
yarn add animate.css yarn add tailwind --dev
now in our
styles.css file, we will import tailwind
@tailwind base; @tailwind components; @tailwind utilities;
and add styles build script in
package.json
"rebuild-styles": "npx tailwind build ./src/styles.css -o ./src/index.css",
Writing our first component.
Let's rename our App.css file to App.re, delete all its contents, and write simple ReasonReact component.
Nice right? With ReasonML, we don't need to import or export packages, and in fact, each file is a module, so if our file name is App.re, we can simply use component in a different file.
String to element
In ReasonReact, if you want to add text in component, you do it by using
ReasonReact.string
Also, I prefer the following syntax:
You will see it quite a lot in this project. This syntax is reverse-application operator or pipe operator that will give you an ability to chain functions so
f(x) is basically written as
x |> f.
Now you might say, but wait a second that will be a tedious thing to do in ReasonReact. every string needs to be wrapped with ReasonReact.string. There are various approaches to that.
A common approach is to create
utils.re file somewhere with something like
let ste = ReasonReact.string and it will shorten our code to
Through the project, I use
ReasonReact.string with a pipe so the code will be more self-descriptive.
What we will be creating
So now when we have our ReasonReact app, it's time to see what we will be creating in this section:
This app will be a simple blog, which will use GraphQL API, auto-generated by Hasura, will use subscriptions and ReasonReact.
Separate app to components
We will separate apps to components such as
Header,
PostsList,
AddPostsForm and
Modal.
Header
Header will be used for top navigation bar as well as for rendering "Add New Post" button on the top right corner, and when clicking on it, it will open a Modal window with our
AddPostsForm.
Header will get
openModal and
isModalOpened props and will be just a presentational component.
We will also use javascript
require to embed an SVG logo in the header.
Header button will stop propagation when clicked using
ReactEvent.Synthetic ReasonReact wrapper for React synthetic events and will call
openModal prop passed as labeled argument (all props are passed as labeled arguments in ReasonReact).
Modal
Modal component will also be a simple and presentational component
For modal functionality in our
App.re file, we will use
useReducer React hook wrapped by Reason like so:
Notice that our
useReducer uses pattern matching to pattern match on
action variant. If we will, for example, forget
Close action, the project won't compile and give us an error in the editor.
PostsList, Post
Both PostsList and Post will be just presentational components with dummy data.
AddPostForm
Here we will use React
setState hook to make our form controlled. That will be also pretty straightforward:
onChange event will look a bit different in Reason but that mostly because of it's type safe nature:
<input onChange={e => e->ReactEvent.Form.target##value |> setCoverImage }/>
Adding GraphQL Backend using Hasura
Now it's time to set GraphQL backend for our ReasonReact app. We will do that with Hasura.
In a nutshell, Hasura auto-generates GraphQL API on top of new or existing Postgres database. You can read more about Hasura in the following blog post blog post or follow Hasura on Youtube [channel](.
We will head to hasura.io and click on Docker image to go to the doc section explaining how to set Hasura up on docker.
We will also install Hasura cli and run
hasura init to create a folder with migrations for everything that we do in the console.
Once we have Hasura console running, let's set up our posts table:
and users table:
We will need to connect our posts and users by going back to posts table -> Modify and set a Foreign key to users table:
We will also need to set relationships between posts and users so user object will appear in auto-generated GraphQL API.
Let's head to the console now and create first dummy user:
mutation { insert_users(objects: {id: "first-user-with-dummy-id", name: "Test user"}) { affected_rows } }
Let's now try to insert a new post:
mutation { insert_posts(objects: {user_id: "first-user-with-dummy-id", title: "New Post", content: "Lorem ipsum - test post", cover_img: ""}) { affected_rows } }
If we query our posts now will get all the data that we need for our client:
query getPosts{ posts { title cover_img content created_at user { name avatar_url } } }
Adding GraphQL to our app
Let's install a bunch of dependencies to add GraphQL to our ReasonReact app and start getting blog posts in real-time.
yarn add @glennsl/bs-json apollo-boost apollo-link-ws graphql react-apollo reason-apollo subscriptions-transport-ws
When we work with Reason, we want to run an introspection query to our endpoint so we will get our graphql schema introspection data as json. It will be used to give us graphql queries completion and type checking in the editor later on, which is pretty cool and best experience ever.
yarn send-introspection-query
We also need to add
bs-dependencies to our
bsconfig.json
"bs-dependencies": [ "reason-react", "reason-apollo", "@glennsl/bs-json" ], "ppx-flags": ["graphql_ppx/ppx"]
We've added
graphql_ppx ppx flag here - that will allow us to write GraphQL syntax in ReasonML later on.
Now let's create a new
ApolloClient.re file and set our basic ApolloClient
Adding queries and mutations
Queries
Let's head to our
PostsList.re component and add the same query we ran previously in Hasura graphiql:
Now we can use
GetPostsQuery component with render prop to load our posts. But before that, I want to receive my GraphQL API result typed, so I want to convert it to Records.
It's as simple as adding types in
PostTypes.re file
open PostTypes
The final version of
PostsList component will look as following:
Mutations
To add mutation to our
AddPostForm, we start in the same way as with queries:
The change will be in the render prop. We will use the following function to create variables object:
let addNewPostMutation = PostMutation.make(~title, ~content, ~sanitize, ~coverImg, ());
to execute mutation itself, we simply need to run
mutation( ~variables=addNewPostMutation##variables, ~refetchQueries=[|"getPosts"|], (), ) |> ignore;
The final code will look like this:
Adding Subscriptions
To add subscriptions we will need to make changes to our
ApolloClient.re. Remember we don't need to import anything in Reason, so we simply start writing.
Let's add
webSocketLink
and create a link function that will use
ApolloLinks.split to target WebSockets, when we will use subscriptions or
httpLink if we will use queries and mutations. The final ApolloClient version will look like this:
Now to change from query to subscription, we need to change word
query to
subscription in graphql syntax and use
ReasonApollo.CreateSubscription instead of
ReasonApollo.CreateQuery
Summary and what's next
In this blog post, we've created a real-time client and backend using Hasura, but we haven't talked about Serverless yet. Serverless business logic is something we will look into in the next blog post. Meanwhile, enjoy the read and start using ReasonML.
You can check out the code here: and follow me on Twitter @VladimirNovick for updates.
From gqlgen to GraphQL.js: a story of “choosing the right tool for the right job”
Louis Tsai -
Typescript HOCs with Apollo in React - Explained.
George Shevtsov -
Hey, now I know React, so what’s next? 🧐
Miguel Jiménez Benajes -
Looks like you don't have the user field set up on posts type. You have user_id so how would you return the name and avatar from the
userstable?
this query doesnt work if i have the same set up as you:
Solution is tracking the foreign keys on your tables. See:docs.hasura.io/1.0/graphql/manual/...
Not exactly. an easier solution is what I wrote. You need to add relationships in relationships tab. I added a screenshot to clarify
Nice article!
One small thing though about this part
The result already comes typed but as a bound object (
{ . "etc": int}) so saying "to receive the result typed" can be a bit confusing for beginners IMO
I’ve no experience with TypeScript and ReasonML. Though, I’m interested to try writing statically typed code in my next project. Which of the two do you recommend for a beginner?
Anyhow, thanks for writing this. I’ll be playing around with Hasura in my next project so hopefully it’ll be a good experience.
I would always prefer Reason over typescript because you actually don’t need to type lots of things because compiler will infer types, but you have to be aware of the fact that Reason only looks like JavaScript. It’s way more powerful but you need to understand functional programming constructs such as pattern matching, variants and more. I suggest checking my YouTube channel for 10h ReasonML bootcamp that will cover the basics and even some advanced parts of ReasonML. And soon there will be more content on ReasonReact so stay tuned.
Thanks for the explanation. Maybe I’ll try ReasonML first.
10h...that’s long haha. If it’s that long, I reckon I can learn a lot about ReasonML from it.
Keep up the good content.
It was 4 days online bootcamp 3+ h every day. | https://dev.to/hasurahq/building-blog-cms-in-reasonml-with-graphql-and-serverless-using-hasura-part-1-4c6h | CC-MAIN-2019-30 | refinedweb | 2,520 | 62.27 |
django-adminplus 0.3
Add new pages to the Django admin.” (the Django model admin is a default module) but there seems to be a lot of boiler plate code to do it. django-admin-tools does not, as far as I can tell, support adding custom pages.
All AdminPlus does is allow you to add simple custom views (well, they can be as complex as you like!) without mucking about with hijacking URLs, and providing links to them right in the admin index.
Installing AdminPlus
Install from PyPI with pip:
pip install django-adminplus
Or get.sites', view=my_view) # And of course, this still works: from someapp.models import MyModel admin.site.register(MyModel)
Now my_view will be accessible at admin/somepath and there will be a link to it in the Custom Views section of the admin index.
You can also use register_view as a decorator:
@admin.site.register_view('somepath') def my_view(request): pass
register_view takes some optional arguments:
name: a friendly name for display in the list of custom views. For example:
def my_view(request): """Does something fancy!""" admin.site.register_view('somepath', 'My Fancy Admin View!', view=my_view)
urlname: give a name to the urlpattern so it can be called by redirect(), reverse(), etc.
visible: a boolean that defines if the custom view is visible in the admin dashboard.
All registered views are wrapped in admin.site.admin_view.
Note
Views with URLs that match auto-discovered URLs (e.g. those created via ModelAdmins) will override the auto-discovered URL.
- Downloads (All Versions):
- 0 downloads in the last day
- 81 downloads in the last week
- 3677 downloads in the last month
- Author: James Socol
- License: BSD
- Categories
- Development Status :: 4 - Beta
- Environment :: Web Environment
- Environment :: Web Environment :: Mozilla
-: jsocol
- DOAP record: django-adminplus-0.3.xml | https://pypi.python.org/pypi/django-adminplus/0.3 | CC-MAIN-2016-18 | refinedweb | 298 | 57.16 |
in reply to
Unloading a perl module
Better to declare them in httpd.conf or a startup.pl type script.
# httpd.conf
PerlModule Date::Manip
# or startup.pl
use Date::Manip ();
[download]
This allows them to be cached on server boot and shared between child processes. The () after the module in startup.pl is very important as it means the module won't polute your global namespace (exports nothing).
This gives you the performance advantage of having the module already compiled and cached, while keeping your memory requirements (per child) down. Make sure to still use the module in your hanlder/script though; while not always nesseccary it gets pretty confusing if you don't.
If you're running Apache::Registry or PerlRun the rules are slightly different. Check the guide for more info.
I've tested this in the past. I found that even
if you don't have a startup.pl
and just let your cgi scripts use the modules, they
still get loaded into global namespace to be shared
by all the children. I didn't do a lot of testing in
this area, so your results may vary. If they do, I'd love
to hear more about it.
____________________ | http://www.perlmonks.org/?node_id=125293 | CC-MAIN-2015-32 | refinedweb | 204 | 76.52 |
",...
Operating system in a mainframe and personal computer
What are the differences between operating systems that are used in a mainframe computer and personal computer?Thanks! We'll email you
when relevant content is
added and updated.
Would anyone be able to tell me what MIS is?. laptop won’t shut down
I have a Toshiba laptop that is about a year old. One day it was installing updates as it shut down, but I was in a rush so I pushed the power button to force it to shut down. Now, whenever I shut it down, it says it is installing updates but it never shuts down, even if I leave it overnight. I.....
Sorting of students strength on one form in Microsoft Access
I have a table named Students with the following fields and data ID Level Session Gender Regular/Plus Shift Program Section: 13 INTERMEDIATE 2013-2015 MALE REGULAR MORNING FSC ENG FSC B-4 14 INTERMEDIATE 2013-2015 MALE REGULAR MORNING FSC ENG FSC B-3 15 INTERMEDIATE 2013-2015 MALE REGULAR MORNING......Thanks! We'll email you
when relevant content is
added and updated.Thanks! We'll email you
when relevant content is
added and updated. write a C++ program in a short way
#include<iostream>,#include<fstream>,#include<vector>,#include<string>,#include<algorithm> using namespace std; int main() { vector<vector >Container1;// prototype of the functions (Declaring functions) vectorContainer2;//memory with no limit it holds a...
Not able to login to AS/400
A user is not able to connect to AS/400. His ID and password are correct, the account not disabled, the error is firewall. Please help me how to resolve it. | http://itknowledgeexchange.techtarget.com/itanswers/page/492/?tt_nocache=1%5C%5C%5C%5C%5C%5C%5C%5C%27%22 | CC-MAIN-2017-22 | refinedweb | 279 | 60.14 |
I have a photodiode connected to an ADS1015 A/D converter (many thanks to Roberthh for his modifications to the driver) and everything is working fine on the converter side. Basically I am measuring the time between pulses and writing the resulting delay in microseconds to an InfluxDB database. All subsequent calculations are done with a database query, eliminating lengthy calculations in code.
This works fine for 1-3 hours and then I get the dreaded ECONNABORTED (103) error.
I am pretty new to this and would appreciate comments on how to eliminate the error which I suspect has something to do with urequests.
I am simulating a load of 3.6kW with a Pyboard using the red LED which is flashing about 4 times a second.
Code: Select all
from machine import I2C, Pin, Timer, idle, disable_irq, enable_irq from ads1x15 import ADS1015 import urequests import utime import gc i2c = I2C(scl=Pin(5), sda=Pin(4), freq=100000) adc = ADS1015(i2c, 72, 3) adc.alert_start(4, 0, threshold_high=500,threshold_low=498,latched=False) server='' data = 'useconds value=' now_ms = utime.ticks_us() last_ms = utime.ticks_us() diff=0 pulsed=False alert = Pin(12, Pin.IN) def pulse(p): global now_ms, last_ms, diff, pulsed state = disable_irq() pulsed=True last_ms = now_ms now_ms = utime.ticks_us() diff = utime.ticks_diff(now_ms, last_ms) enable_irq(state) alert.irq(trigger=Pin.IRQ_FALLING, handler=pulse, hard = True) while True: try: if pulsed: ms = str(diff) send_data=data+ms resp = urequests.post(server, data=send_data) resp.close() pulsed = False gc.collect() gc.threshold(gc.mem_free() // 4 + gc.mem_alloc()) idle() except Exception as err: print("Exception", err) | https://forum.micropython.org/viewtopic.php?t=7224&p=41074 | CC-MAIN-2020-10 | refinedweb | 264 | 59.9 |
Ubuntu.Components.BottomEdgeRegion
Defines an active region within the BottomEdge component. More...
Properties
- contentComponent : Component
- contentUrl : url
- enabled : bool
- from : real
- to : real
Signals
Detailed Description
Bottom edge regions are portions within the bottom edge area which can define different content or action whenever the drag enters in the area. The area is defined by from and to properties vertically, whereas horizontally is stretched across bottom edge width. Custom content can be defined through contentUrl or contentComponent properties, which will override the BottomEdge::contentUrl and BottomEdge::contentComponent properties for the time the gesture is in the section area.
import QtQuick 2.4 import Ubuntu.Components 1.3 MainView { width: units.gu(40) height: units.gu(70) Page { header: PageHeader { title: "BottomEdge regions" } BottomEdge { id: bottomEdge height: parent.height - units.gu(20) hint: BottomEdgeHint { text: "My bottom edge" } // a fake content till we reach the committable area contentComponent: Rectangle { width: bottomEdge.width height: bottomEdge.height color: UbuntuColors.green } // override bottom edge sections to switch to real content BottomEdgeRegion { from: 0.33 contentComponent: Page { width: bottomEdge.width height: bottomEdge.height header: PageHeader { title: "BottomEdge Content" } } } } } }
Entering into the section area is signalled by the entered signal and when drag leaves the area the exited signal is emitted. If the drag ends within the section area, the dragEnded signal is emitted. In case the section's to property is less than 1.0, the bottom edge content will only be exposed to that value, and the BottomEdge::status will get the Committed value. No further drag is possible after reaching Commited state.
Note: Whereas there is no restriction on making overlapping sections, beware that overlapping sections changing the content through the contentUrl or contentComponent properties will cause unpredictable results.
Property Documentation
Specifies the component defining the section specific content. This propery will temporarily override the BottomEdge::contentComponent property value when the drag gesture enters the section area. The orginal value will be restored once the gesture leaves the section area.
Specifies the url to the document defining the section specific content. This propery will temporarily override the BottomEdge::contentUrl property value when the drag gesture enters the section area. The orginal value will be restored once the gesture leaves the section area.
Enables the section. Disabled sections do not trigger nor change the BottomEdge content. Defaults to false.
Specifies the starting ratio of the bottom erge area. The value must be bigger or equal to 0 but strictly smaller than to. Defaults to 0.0.
Specifies the ending ratio of the bottom edge area. The value must be bigger than from and smaller or equal to 1.0.
Note: If the end point is less than 1.0, ending the drag within the section will result in exposing the bottom edge content only till the ration specified by this property.
Signal Documentation
Signal triggered when the drag ends within the active bottom edge section area.
Signal triggered when the drag enters into the area defined by the bottom edge section.
Signal triggered when the drag leaves the area defined by the bottom edge section. | https://phone.docs.ubuntu.com/en/apps/api-qml-development/Ubuntu.Components.BottomEdgeRegion | CC-MAIN-2020-45 | refinedweb | 511 | 50.73 |
Why Gatsby & Sanity?
As I've spent more time developing, one thing I have been doing more often is building personal sites for other people. It's an easy way to keep practicing new frontend skills and techniques, as well as build up a portfolio that shows what you can do. One of the most annoying things about this, however, is creating an easy and custom way to manage content for those websites. There are tons of options out there like WordPress, but one of my favorite tools is Gatsby.
Why I like Gatsby
Gatsby is a super-fast static site builder with arguably the best documentation I've ever encountered. As a dev, it's an incredible platform. Gatsby is essentially data agnostic, meaning you can use nearly any data type to provide it with content. It could be a WordPress API, markdown, RSS feeds, it's great. Lately, my choice is for managing content easily is Sanity.
Why I Like Sanity
Sanity is essentially a super slick headless CMS/backend service. What's great about Sanity is that it comes with the Sanity Studio, a React application that you can run locally or host on a service such as Netlify. The beauty of it is that it allows you to define your own data types. The best way I can imagine explaining how it works (and why you would use it) is to compare it to a CMS such as Wordpress. With WordPress, you have pages and posts, and you can add tags and categories, but beyond that, you would essentially have to massage those types of data in an effort to create a site. That's confusing because the end-user could ultimately be dealing with having to manage data for their website that doesn't really describe their data appropriately. For example, if my wife (an actor) was using a platform such as Wordpress as a CMS and wanted to add a recent show that she performed to her resume, she might be making a new "Post" with a category. It works, but with Sanity, you could make your own data type called "Show" in the schema and that automatically creates a document type that could be edited by her (the end-user). For another example, I'm currently working on a site for a woodworker. They might want an area of their website that catalogs the different types of wood along with the wood properties for their readers. Using Sanity, I can make my own data-type for that. The great thing about Sanity is that you have the flexibility to create custom ways to store your data and manage your content (you can even edit the Sanity Studio itself because it's just a React app) all without having to set up your own database and manage all of that. For me, it's a happy space of having a custom CMS I can make for a particular client while not actually having to develop everything from scratch.
What We're Doing
In this post, I am going to be going over how to create a brand new, clean project in Gatsby and connecting it to Sanity as it's the primary data source. The Sanity site actually has a few starters you can use with just a few clicks, but it comes with a lot of pre-defined CSS, as well as the clutter of all the sample files that I don't necessarily want to deal with when starting a new project. Following these steps, you can grasp the basics of how Sanity and Gatsby work together and have a clean setup for both so you can use whatever you want for development, such as styling, file structure, etc. Here we go!
Steps
1 - Create Directory
So first we are going to create a directory for our project. I do this first because the CLI tools for Gatsby and Sanity are a little different and I also prefer to have both under a single git repo, so go ahead and make a directory named whatever, wherever you want. I named mine gatsby-sanity-starter.
2 - Create Sanity Studio
First, you need to install the Sanity CLI, so using npm run
npm install -g @sanity/cli to install the CLI globally. After that, make a new directory in your root directory and name it studio. From the studio directory, run
sanity init. The first time you do this you'll need to log in, create an account, and when prompted for a project, you are going to select Create a New Project and then for the project template you want to select Clean project with no defined schamas. This is going to create a brand new Sanity Studio inside the /studio directory.
3 - Create a Gatsby Site
If you have not already, install the Gatsby CLI with
npm install -g gatsby-cli. Next, you're going to go to your root directory and run
gatsby new web, which creates a new Gatsby project within the /web directory.
4 - Remove git from the Gatsby site
Next, we are going to remove git from the Gatsby site in the /web directory. This can be done by running
rm -rf .git from within the /web directory. I prefer to do this because I like having a single git repo for both the studio (in /studio) and the actual website (in /web). Alternatively, you could run
git init from within the /studio directory and have two separate repos for the studio and the website.
5 - Install gatsby-source-sanity
If you're new to Gatsby, a basic principle is that it uses plugins to incorporate various features. The plugin gatsby-source-sanity allows you to run GraphQL queries in your Gatbsy app against a GraphQL API route created by Sanity. You install this in a few steps.
- From within the /web directory, run
npm i gatsby-source-sanity --save.
- Add the plugin config to the file gatsby-config.js
// in your gatsby-config.js module.exports = { // ... plugins: [ { resolve: 'gatsby-source-sanity', options: { projectId: 'abc123', dataset: 'blog', // a token with read permissions is required // if you have a private dataset token: process.env.MY_SANITY_TOKEN, }, }, ], // ... }
- Within the plugin config, you need to replace the projectId and dataset with your Sanity project id and dataset name. You can find this in the /studio/sanity.json file.
6 - Create Your Schema
At this point, in order to check everything is up and running properly in terms of the connections between your Gatsby site and Sanity you'll need to create some type of basic Schema. When we created a new Sanity site, it is 100% void of any schema, so there isn't a way to see anything getting pulled into you Gatsby site. We will do this in 2 steps.
- Create the file blogPost.js and sponsor.js *in the schemas directory so you have */studio/schemas/blogPost.js and /studio/schemas/sponsor.js with the following.
// in blogPost.js export default { name: 'blogPost', title: 'Blog post', type: 'document', fields: [ // ... other fields ... { name: 'name', title: 'Name', type: 'string' }, { name: 'sponsor', title: 'Sponsor', type: 'sponsor' } ] } // in sponsor.js export default { name: 'sponsor', title: 'Sponsor', type: 'object', fields: [ { name: 'name', title: 'Name', type: 'string' }, { name: 'url', title: 'URL', type: 'url' } ] }
- Import blogPost and sponsor into the types in schema.js.
import createSchema from 'part:@sanity/base/schema-creator' import schemaTypes from 'all:part:@sanity/base/schema-type' import blogPost from './blogPost'; import sponsor from './sponsor'; export default createSchema({ name: 'default', types: schemaTypes.concat([ /* Your types here! */ blogPost, sponsor ]) })
If you want to check out more about creating schemas in Sanity, the docs are here.
7 - Deploy GraphQL API
Anytime you modify your schema, you have to deploy your schema again so that your GraphQL API knows what's available to query. Since we just added a blogPost type and a sponsor type we need to deploy the API. We do this from /studio with the command
sanity graphql deploy.
8 - Make Sample Blog Post
From /studio go ahead and run
sanity start, and you can open up the Studio from localhost:3000. Here you should see the "Blog post" under Content, and you can click on that, and then the + button to create your first post. Just type in some filler info and move to the next step.
9 - Build Gatsby Site
Gatsby is a static site builder, so we need to build the site. Do this from /web with
gatsby develop.
Once the site is built, you should be able to see the generic Gatsby starter page from localhost:8000, but if you head over to localhost:8000/__graphql you can see the graphQL playground. If all went well, there should be a few queries on the left that have Sanity in it, such as allSanityBlogPost. At this point, you can try running the query below and you should get data returned containing your blog post data you made.
query MyQuery { allSanityBlogPost { totalCount edges { node { _id name } } } }
That's It!
Hopefully, now you have a clean Gatsby site hooked up to a Sanity Studio! From here, you can develop a Gatsby site however you normally would while you have new custom headless CMS through Sanity. Let me know how it goes!
You can read more about Gatsby and Sanity from the source.
Discussion (10)
Thanks needed a reminder on what I did to get this setup with Gatsby before. One thing to note in your instructions
sanity graphQL deployneeds to be lowercase
sanity graphql deploy. And finally your importing
import sp from './sponsor';but your referencing it is sponsor lower down so it should be
import sponsor from './sponsor';Also I think you should be running
gatsby developnot build, that's generally left for when your hosting your site, or if you need to check specific build things like lighthouse tests.
Yes thanks! I made these changes
If anyone is getting issues on step 9 when running "gatsby develop" (such as Gatsby, React, or Sanity Types not found or missing) this is related to step 5 when the gatsby-source-sanity is installed. Some users have been having issues with NPM, but running the same via yarn seems to resolve and produce working builds for now.
More info here: github.com/gatsbyjs/gatsby/issues/...
This is exactly what I needed. Thanks!
Thanks for this, I was looking for a simple explanation of how to mess with Sanity and you checked all the right boxes!
The time it took me to read this and get it set up and running vs. the days I spent configuring WordPress to be a headless CMS. I'm kinda pissed.
Thanks for this excellent guide Ryan!
Of course! Love Sanity. Hope others can benefit.
Thanks Ryan! I appreciate the article!
What about Gatsby build time benchmark compared to Hugo? | https://dev.to/doylecodes/making-new-projects-with-gatsby-sanity-30nh | CC-MAIN-2021-49 | refinedweb | 1,813 | 70.84 |
Want to create an application which runs multiple functionality simultaniously? So now it is turn to learn threading with java. create multiple thread and do stuff like animation or multi processing with them.
The Concept of multi threading born from concurrent programming. Every Computer user uses so many applications at once so it was necessary for the software developers to provide some solution by which they could run multiple application simultaneously. This was obtain by using the concurrency. The application uses two trends Process based or thread based concurrency.
The Java Thread model
The Java thread model is based on the multi threading concept. The process is rather a big picture of an application but the thread is a smaller part. there could be more than one thread in a process. The Java Thread model treats everything as a thread. It is in the language specification of Java. When you run a Java program it creates a thread which is parent thread of all other threads in Java.
If you have made a guess, yes it is main method. It is the parent thread of all the threads. See the example as following program.
public class ThreadingDemo { public static void main(String[] args){ System.out.println(Thread.currentThread()); } }
Thread[main,5,main]
This tells you which is current thread, the format is Thread[thread-name,thread-priority,thread-type]. If you want to try some changes in it. See to following program.
public class ThreadingDemo { public static void main(String[] args){ Thread.currentThread().setPriority(7); Thread.currentThread().setName("test"); System.out.println(Thread.currentThread()); } }
Thread[test,7,main]
Now you see the name and priority has been changed.
So this was a basic thread model. But What if you want to make your own thread ?
Yeah that is easy and now I am going to tell you that. But wait, Wouldn't you like to know why you would like to create threads if JVM is already taking care of this.
Yes, JVM is creating threads for you but what if you want to create a game in which you want to run a character and the background, on the same time enemies and their fires. Yes, here Java threads will help you. Or if consider you are making an audio video application. There you will need to run video and audio function simultaneously. There again you'll need multiple threads. Don't worry its not too difficult and we'll also give you a sample animation to work on after we finish up with all the concepts of Threading. So lets now move on.
Creating Threads In Java
There are two ways you can create Thread, One by using Thread class as super class. And secondly by implementing the interface Runnable. Following two programming example will tell you how.
public class ThreadingDemo extends Thread{ public void run(){ System.out.println("yey!! My New Thread is running."); } public static void main(String[] args){ new ThreadingDemo().start(); } }
yey!! My New Thread is running.
Now did you noticed I called a start() method while I created a run method ?
Don't get confused. The Thread class comes with several method run() and start() are one of them. you start a thread by calling start() method on its object, which in this case is ThreadingDemo class object. You can call it on only the Thread type object but as ThreadingDemo is extending the Thread so here the inheritance principles comes into play.
When you call a start on the Thread, it executes the run() method. The run method is not created in ThreadingDemo but is Thread class method which is being overridden by the ThreadingDemo. Whatever function you want your thread to perform, you define it all in the run() method.
Now let us see another way for creating thread, that is implementing Runnable interface. Before we move on, let me tell you the reason why you will be following this method rather than extending thread.
Consider you are working on some program and you have a class hierarchy already defined for Inheritance purpose but now you need to create a Thread in your subclass. But wait, Java does not allow to extend multiple classes, what do we do then ?
If there is a problem, there is also a solution. The Runnable interface to the rescue. You can implement as many interfaces as you want. So you get a solution. To see how, please take a good look at following program.
public class ThreadingDemo implements Runnable{ private Thread th; public void startThread(){ th = new Thread(this); th.start(); } public void run(){ System.out.println("yey!! My New Thread is running."); } public static void main(String[] args){ ThreadingDemo td = new ThreadingDemo(); td.startThread(); } }
yey!! My New Thread is running.
Well, you see, it works. And is a good way to do the thing. The creation of thread object and starting it could have been more direct but I like to keep things proper and understandable. If you want just do it in constructor or however you want, I just wanted to make you understand the concept.
The run method in the program is now the only method provided with the Runnable Interface. The purpose is to make you able to create thread without the need for extending Thread class.
Thread Scheduling
The threads if you know by the means of their role in Operating system are part of system process. Every program want its own process to be executed first but it is up to CPU to help these processes and thread to work in concurrence. So Java also provide the utility method with the Thread class which allows you to do thread scheduling manually.
The sleep() method from the Thread class is used for the purpose. This method put your thread to sleep and allows another thread to execute*. Please look at following program to know the scenario.
public class ThreadingDemo implements Runnable{ private Thread th; public void startThread(){ th = new Thread(this); th.start(); } public void run(){ for(int i=0; i<5; i++){ try{ System.out.println(i); th.sleep(1000); } catch(InterruptedException ie){ System.out.println(ie); } } } public static void main(String[] args){ ThreadingDemo td = new ThreadingDemo(); td.startThread(); System.out.println("thread started"); } }
thread started 0 1 2 3 4
The thread now uses the sleep() method with 1000 millisecond as the argument. The argument passed in sleep() method is of type long. This throws InterruptedException so it is necessary to keep it inside try and catch block. This is one of the checked exception so if you don't do so, you'll get a warning from compiler.
Now when I run the program it produces the output, and you'll see that there is time delay of about a second in the successive printing of numbers in for loop. So this is how java allows you to take care of scheduling of your thread.
*Note: Please note that, JVM is also a system program running on the CPU so actually scheduling is handled by your CPU only. JVM acts as secondary Controller for the Java threads. So don't expect your program to run with the highest priority if you are setting the thread priority to the topmost**.
** Java defines priority from 1 to 10, which is also defined with constants, MIN_PRIORITY(1), NORM_PRIORITY(5) and MAX_PRIORITY(10). you can set the priority of thread using method setPriority().
There are more advance concept in the Java multi threading. I suggest practicing the basics first and than move on to Java Thread - Advance level multi Threading. | http://www.examsmyantra.com/article/51/java/the-java-multi-threading-basic-concepts-of-thread-model-in-java | CC-MAIN-2019-09 | refinedweb | 1,268 | 66.54 |
In this module, we will talk about what are Header files in C, their importance, and their uses. Till now, we have seen various concepts like Hello World Program in C, Input-Output Statement in C, and in all these, we have been seen that while writing a program we need a header file, without that we can’t proceed towards the program or it will simply throw an error in the console like header file not declared or a particular function is invalid.
So, Let’s see in detail the importance and uses of header files in programming.
What are Header Files in C?
In C Programming language, there are many predefined functions available which make the program very easy to write, and these predefined functions are available in the C library, to use all these functions you need to declare a particular header file because the header file then makes a call to C library and use that particular function in your program. You can include header file by C Pre-processing directives i.e., “#include”, and must contain a “.h” extension at the end.
For example, #include<stdio.h>, this is the header file which stands for standard input-output function, like this is the basic header file which is necessary for writing any program in C, as it allows you to perform input and output operations in C, like scanf( ) and printf( ) functions, and #include is the pre-processor directive.
By including the header file, we can use all its functions and contents in our programming. There are 2 types of header files present in C Programming.
Pre-existing Header files
These files are the files that are already available in the C compiler, we have to just import them, whenever we want to use them.
For example: #include <File_name.h>
User-defined Header files
When users want to define their own header, and that can be simply imported by using “#include”.
For example: #include “File_name.h”
So, these are the two ways through which we can use the header file in our program. The pre-processor directive i.e., “#include” is only responsible for telling the compiler that the header file needs to be processed before the actual compilation and must include all the necessary functions and data for the program to use. The user-defined data type searches for a file name in the current directory and then finds it calls them in the program.
What are the different types of header files available in C Programming?
There are many different types of header files available and used for different purposes, which makes our program easier, let’s see some of them in detail.
- #include <stdio.h>: It is the basic header that must be included while writing any program in C, as it allows you to perform input and output operations using scanf( ) and printf( ) functions.
- #include <math.h>: This header file allows us to perform different types of mathematical functions like:
sqrt(): It is used to find the square root of a particular number, if you want to find the square root of n, then it should be declared as sqrt(n);
pow(): This function is used to compute the power of a particular number like 23 = 8.
log2(): It is used to find the log of the number to the base 2.
- #include <string.h>: This function allows us to perform various functionalities related to the manipulation of string like:
strlen(): It is used to find the length of the particular string.
strcmp(): It is used to compare the two strings, it returns true or 1 if both the string are the same and return false or 0 if both strings are not the same.
strcpy(): This function is used to copy one string to another variable.
- #include <stdlib.h>: This stands for the standard library, which contains general utility functions.
- #include <time.h>: It is used for date and time functions.
- #include <ctype.h>: It includes various characters type handling functions.
- #include <conio.h>: It stands for console input-output operations, like it does not belong to C standard library, but used for performing console input-output operations.
- #include <float.h>: this header files limits to float types only.
So, these were some header files available with their meanings, Let’s see one example to get clearer:
#include <stdio.h> #include <math.h> int main() { int num1 = 25; int num2 = 3; int t=0, m=0; t = sqrt(num1); m = pow(t, num2); printf("t = %d\n ", t); printf("m = %d ", m); return 0; }
The output of the above program is:
In the above program, I have used two header files i.e., #include <stdio.h> and #include <math.h> for performing various mathematical operations like having computed the square root of 25 and then calculated 5 to the power 3 using pow( ) functions.
I hope, you all liked this particular module and got excited by knowing this particular concept and must be waiting for another module for a more exciting and interesting concept of C Programming.
Until then, keep learning, Practice Coding! | https://usemynotes.com/what-are-header-files-in-c/ | CC-MAIN-2021-43 | refinedweb | 853 | 59.33 |
C# 6.0 and the .NET 4.6 Framework pp 859-928 | Cite as
ADO.NET Part II: The Disconnected Layer
Abstract
The previous chapter gave you a chance to examine the connected layer and the foundational components of ADO.NET, which allow you to submit SQL statements to a database using the connection, command, and data reader objects of your data provider. In this chapter, you will learn about the disconnected layer of ADO.NET. Using this facet of ADO.NET lets you model database data in memory, within the calling tier, by leveraging numerous members of the System.Data namespace (most notably, DataSet, DataTable, DataRow, DataColumn, DataView, and DataRelation). By doing so, you can provide the illusion that the calling tier is continuously connected to an external data source; the reality is that the caller is operating on a local copy of relational data. | https://link.springer.com/chapter/10.1007/978-1-4842-1332-2_22 | CC-MAIN-2018-13 | refinedweb | 146 | 55.95 |
Dec 08, 2011 09:00 PM|sumu456|LINK
I've created a WCF project and hosted successfully in IIS 7.0 . When I browse to my service using the url the form is displayed.
Then in my website I add a service reference(MyServiceRefernce) using the above URL. The WCF exposes objects which is used by the client.
In my pages I import the namespace "using MyServiceRefernce" and call the methods on the proxy which returns the object.
For example
MyServiceRefernce.ServiceClient client = new MyServiceRefernce.ServiceClient();
Contact c = client.GetContact(contactId);
When I build my application and run from VS 2010 , everything runs fine. But when I run from IIS i get the following compilation error.
Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.
Compiler Error Message: CS0246: The type or namespace name 'MyServiceReference' could not be found (are you missing a using directive or an assembly reference?)
The website uses forms-based authentication .
Please help me fix this error. I am in urgent need.
Thanks
Dec 09, 2011 04:47 AM|PiyushJo|LINK
Have you tried to regenerate the service reference? You can also click the generated service reference, then click Show All files and then go to Reference.cs to see the exact namespace and class names you need to use to instantiate the proxy. I am guessing you refreshed the service reference which caused some changes. If it still doesn't work - send me your client code with generated service reference at piyush.joshi@microsoft.com and I'll take a look. Thanks.
Dec 09, 2011 02:58 PM|sumu456|LINK
This is what I noticed. My project is a website project. When I deploy under the default website , i get the compilation error that it is not able to find the WCF Namespace.
But when I add the website under the Sites folder and give a new port number, the project works fine..
How do i get it working under the default web site. I do not want to use the port number in the URL.
Thanks
Dec 09, 2011 03:14 PM|sumu456|LINK
The issue is similar to the one discussed in this thread
Dec 09, 2011 09:16 PM|sumu456|LINK
I finally managed to get the website up and running. But still it does not work correctly.
Everything works fine when browse through localhost ()
But when i browse the website using the machine name() it directs me to the login page. I get authenticated and redirected to the default page. The menu links does not work any more. I do not get any error .
Please help me troubleshoot the problem.
Thanks much.
Member
198 Points
Dec 10, 2011 03:26 AM|sujeet_kumar|LINK
hi
see this link , it is help full.
8 replies
Last post Dec 10, 2011 03:26 AM by sujeet_kumar | http://forums.asp.net/t/1747796.aspx | CC-MAIN-2014-42 | refinedweb | 491 | 67.35 |
RDFa
Contents
1 Definition Primer 1.0 Embedding RDF in XHTML, retrieved 15:17, 23 April 2007 (MEST)). Syntax, retrieved 15:17, 23 April 2007 (MEST))
RDFa is part of the semantic web initiative. As an alternative, see other microformats and in particular "semantic XHTML".
2 Example
This short example taken from the Primer shows that RDF data can be embedded within "rel" and "property" attributes.
- The clueless HTML>
- The RDFa version
As you can see it refers to two well known XML namespaces, i.e. ICAL and VCARD.
<html xmlns: ... <p class="cal:Vevent" about="#xtech_conference_talk"> I'm giving <span property="cal:summary"> a talk at the XTech Conference about web widgets </span>, on <span property="cal:dtstart" content="20070508T1000+0200"> May 8th at 10am </span>. </p> ... > ...
- The extracted RDF triplet
A RDF parser now could extract for instance the following information from this file:
<> rdf:type cal:Vevent; cal:summary "a talk at the XTech Conference about web widgets"^^XMLLiteral; cal:dtstart "20070508T1000+0200" .
3 Tools
- See also: RDFa Implementations (at rdfa.info). Lists various libraries and filters.
3.1 On line
- Zitgist dataviewer (a user friendly Semantic Web data viewer). Handles RDF and XHTML+RDFa.
3.2 Browser extensions
- RDFa Javascript implementation (you can install a bookmarklet to activate on given pages). Puts a red box around places with RDFa triples...
- Operator. (Firefox extension) Quote: "Operator leverages microformats and other semantic data that are already available on many web pages to provide new ways to interact with web services."
3.3 Filters
4 Links
4.1 Standards
- Standards
- Bodies (organizations)
4.2 Introductions
- The future of RDFa, bobdc.blog, by Bob Ducharme, Feb 2008.
- Bridging the Clickable and Semantic Webs with RDFa, ERCIM News, by Ben Adida, 2008 ?
- Introducing RDFa, By Bob DuCharme, XML.com, February 14, 2007
- RDFa in a nutshell by fabien gandon, INRIA. | https://edutechwiki.unige.ch/en/RDFa | CC-MAIN-2019-09 | refinedweb | 309 | 59.09 |
This tutorial comes in two parts: Part 1 shows how to build a search widget that spiders the static pages of a site, and Part 2 shows how to search a database of a content management system (CMS).
The PHP back-end used in this tutorial is just an example and is not the only way that we could perform the search. The back-end that the widget is connected to will differ depending on the structure of the website that it’s used on. The code used in this example would work well on a small to medium site with lots of static content. A data-driven or product-heavy site would probably make better use of a database-searching back-end, which would be equally as easy to code.
The PHP
We’ll look at the PHP first and then build around that. There’ll be a simple little script which begins at a specified directory and then spiders down through any subdirectories, collecting the URLs of all of the pages within the tree.
We’ll then need to search though each page to see if it contains the term that the visitor has searched for, and make a note of its URL if it does. Finally we can convert the information to JSON format for easy processing in the browser. Let’s make a start; in a new page in your text editor add the following code:
<?php
//function to get all files in a tree
function searchFiles($startDir, $urls = array()) {

}
?>
Most of the functionality in the PHP will be within the function we define here; the searchFiles function accepts two arguments, the first is the directory to start searching in, the second is an array. Next we need to add the spidering and searching logic, all of which can go into this function; within the function, add the following code:
//get search term from GET
$term = $_GET["term"];

//scan starting dir
$contents = scandir($startDir);
First we get the search term, which will be passed to the file as part of the GET request. We aren't using a database, so in this example I haven't focused on any security measures.
We then use the native PHP function scandir, which reads the contents of a specified directory. The directory to scan is obtained from the first parameter passed to the function. The return value of scandir is stored in the $contents variable and will be an array.
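To picture what scandir hands back, here is a quick hypothetical example for a directory containing a css folder and an index.html file; note that the current (.) and parent (..) directories are always included, which is why we have to skip them in the next step:

<?php
print_r(scandir("."));
// Array ( [0] => . [1] => .. [2] => css [3] => index.html )
?>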
Directly after the scandir call, add the following code:
//loop through each item
for ($x = 0; $x < count($contents); $x++) {

    //build path to each item
    $path = $startDir . "/" . $contents[$x];

    if (is_dir($path)) { //if item is a dir

        //skip the current and parent dirs
        if ($contents[$x] !== "." && $contents[$x] !== "..") {

            //recursively call function to open sub dirs
            $urls = searchFiles($path, $urls);
        }
    } elseif (is_file($path)) { //if item is a file

        //only get HTML files
        $chunks = explode(".", $contents[$x]);
        if ($chunks[count($chunks) - 1] == "html") {

            //open file, read its contents and close it again
            $handle = fopen($path, "r");
            $fsize = filesize($path);
            $fileContents = fread($handle, $fsize);
            fclose($handle);

            //get title text from page
            $pageChunks = explode("<title>", $fileContents);
            $title = explode("</title>", $pageChunks[1]);

            //strip tags from each file
            $cleanContents = strip_tags($fileContents);

            if (stristr($cleanContents, $term) === FALSE) {
                continue;
            } else {

                //trim start of string
                $trimmedPath = substr($path, 2);

                //add to matching URLs array
                $urls[] = array("url" => $trimmedPath, "title" => $title[0]);
            }
        }
    }
}
The for loop, which encapsulates quite a bit of functionality, cycles through each item in the array returned by scandir. Remember that at this stage, it’s just the contents of the starting directory that we’re working with.
We first build the path for each item by concatenating the starting directory with a forward slash and the current filename. This is needed so that we can access files that are within any subdirectories inside the starting directory.
Next we check whether the current item is a directory using the native PHP is_dir function. If it is a directory, we ignore the current (.) and parent (..) directories, so that we only get subdirectories, and then recursively call the searchFiles function again, passing in the $path variable as the directory to search and the $urls array if it exists. It will only exist if the function has already been called, and if it does exist, will contain the URLs of any pages that have already been searched and found to contain the search term.
If the current item is a file, which we confirm with the is_file function, we then check the extension of the file to see whether it’s a file type that we want to search. We obviously don’t want to search script files or CSS files, or any other resource that doesn’t contain content. In this example we’re just searching HTML files. We can check the extension by exploding the string using the period as the separator, and then looking at the last item in the resulting array.
If the current item does have a HTML extension, we open the file in read-only mode and store the entire contents of the file, tags and all, in the $fileContents variable. We use the filesize PHP function to ensure that we read the entire file into the variable. The contents of the file will be stored as one long string.
Next we want to get the title of the page so that we can use this if the file does contain the search term. We can do this easily by first exploding our giant string on the <title> string, and then exploding the remaining string on the </title> string; this gives us the title of the page, which we store in the $title array.
After this we can further prep the string of the file’s content for processing by removing all of the HTML tags from it. This means that only the content of the page will be searched. Once we have a clean string, we can then see if the search term is within the string using the case-insensitive stristr function.
If the page does contain the search term we then tidy up the path to the file by removing the first two characters (the ./) as we won’t need these to link to the file. Finally we add the URL of the file and the title of the page as a new item in the $urls associative array.
Once the function has finished executing we can return the associative array:
//return array of filenames
return $urls;
We still have a couple of tasks to complete with PHP; directly after the searchFiles function add the following code:
//set starting dir as current dir
$startDir = ".";

//call function initially
$urls = searchFiles($startDir);

//delay response
sleep(1);

//convert to JSON object and echo to page
$response = $_GET["jsoncallback"] . "(" . json_encode($urls) . ")";
echo $response;
We first set the current directory as the starting directory; the $startDir variable is passed to the searchFiles function the first time it is called, which we do next, storing the return value (our associative array) in the $urls variable. If no matches to the search term are found the array will still be created, but it will be empty, which we can test for in our JavaScript a little later on.
We also use the PHP sleep function to delay the response by a single second; we probably wouldn’t need to do this in a real implementation as the delay between the browser and server would be likely to be more than this anyway, but for the purpose of this example delaying the response allows us to see the loading spinner that we’ll be using and just seems to make the example work better.
Finally, we convert the $urls array into a properly formatted JSON object using PHP's native json_encode function, wrap the object in parentheses prefixed with the callback name taken from the GET request, and echo it back to the page. Save this file as search.php. It will need to go into the root directory of the site that it is to be used on.
JSON
JSON is a lightweight and efficient mechanism for transporting data across a network. It’s generally quicker and easier to work with than XML, but has yet to achieve the same level of adoption as XML. A technique known as JSONP, which jQuery natively supports, allows us to process the data completely within the browser and completely free of the standard cross-domain exclusion policy. This makes accessing and reusing content from remote domains much simpler.
Our PHP file will convert a standard PHP associative array into a literal JSON object containing an array. Within each item in this array will be another object, and the data returned from our function will appear as property values within this nested object. We haven’t written the jQuery which will process the object yet, but the following screenshot shows how the response object will appear in Firebug so that you can visualize the structure of the JSON:
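To make the mechanism concrete: suppose jQuery generates a callback name of jsonp12345 (the actual name is random) and two pages match the search term. The PHP script would then echo something like the following, which the browser executes as an ordinary script (the URLs and titles here are made up):

jsonp12345([
    {"url": "about.html", "title": "About Us"},
    {"url": "products.html", "title": "Products"}
]);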
Some jQuery and HTML
Now we can create the page that will call the server-side script that we just created when a search is performed; in a new file in your text editor, add the following page (the heading and label text are just placeholders):

<html>
  <head>
    <title>Search Widget</title>
    <link rel="stylesheet" type="text/css" href="search.css">
  </head>
  <body>
    <div id="container">
      <h2 class="text">Site Search</h2>
      <label class="text" for="query">Enter a search term:</label>
      <input type="text" id="query" class="text">
      <button id="search" class="text">Search</button>
    </div>
    <script type="text/javascript" src="jquery-1.3.2.min.js"></script>
    <script type="text/javascript"></script>
  </body>
</html>
We link to a stylesheet, which we'll create shortly, and jQuery. On the page we just have the search UI, which consists of a container, a heading, a label, an input and a button, nothing else. The rest of the elements that the search widget uses we'll create as and when we need them. We also left an empty script element at the bottom of the page; within these tags add the following code:
$(function() {

});
This is the jQuery shorthand way of specifying a function to execute as soon as the DOM is ready. All of the logic to pass the search term to the server and process the response will lie within this anonymous function. Now add the following code to the function:
//hide noResults message if present
($("#noResults").length > 0) ? $("#noResults").fadeOut("fast", function() {
    $(this).remove();
}) : null;

//hide error if present
($("#error").length > 0) ? $("#error").fadeOut("fast", function() {
    $(this).remove();
}) : null;

//hide results if present
($("#results").length > 0) ? $("#results").fadeOut("fast", function() {
    $(this).remove();
}) : null;

//hide success icon if present
($("#success").length > 0) ? $("#success").fadeOut("fast", function() {
    $(this).remove();
}) : null;
We first need to check whether the no results or error messages, or any previous results, are showing in case the visitor has already interacted with the UI. We can also check for the success icon that may have been appended to the widget.
If any of these elements (which we'll add the code for imminently) exist, which we test for by checking whether jQuery's id selector returns a match, we simply hide them using the fadeOut animation and then remove them from the page with a callback that is executed once the animation finishes.
Next we check that the text input for the search term isn’t empty:
//check input not empty
if ($("#query").val()) {

} else {

    //show error message
    $("<p>").addClass("text").attr("id", "noResults").text("Sorry, no results found").appendTo("#container").slideDown("fast");
}
If the input is empty, we can create and display a simple error message that prevents empty submissions and alerts the visitor to the error. We can show the error using the slideDown animation. The following screenshot shows how the error will appear:
The first branch of the conditional however is where the majority of the processing takes place; add the following code within the first part of the if statement:
//disable button and input
$(this).attr("disabled", "disabled");
$("#query").attr("disabled", "disabled");

//add spinner
$("<img>").attr({
    src: "spinner.gif",
    id: "spinner"
}).css({
    position: "absolute",
    top: 27,
    right: 80
}).insertAfter("#search");

//get search term and build querystring
var term = $("#query").val(),
    query = "term=" + term;
First we disable the button to prevent multiple submissions of the same search term. We also disable the input as a visual cue that a search is in progress. Next we can add a little loading spinner so that it appears as if the page is doing something while it waits for the response. Loading spinners are cool and can be downloaded in a ready-to-use format from a variety of sites. We add it to the page and simply position it so that it appears to be inside the input. We then get the value entered into the input, and build the query-string that will be passed to the server. The following image shows how the spinner will appear:
Next we can make the request using jQuery’s getJSON method:
//request JSON object of matching urls
$.getJSON("search.php?jsoncallback=?", query, function(data) {

});
We provide three arguments to the method; the first is the URL of our PHP file with the JSONP query-string attached. The name we specify before the =? (jsoncallback) must match the key that our PHP script reads from the $_GET superglobal when it builds its response, and the trailing =? is what tells jQuery to use JSONP. jQuery will automatically pass the response object to the anonymous function that we pass to the getJSON method as the third argument. The second argument is the data we want to send to the server initially, which of course is the search term.
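Behind the scenes, jQuery replaces that trailing ? with a generated callback name and appends the data, so the actual request that reaches the server looks something like this (the callback name and search term are hypothetical):

search.php?jsoncallback=jsonp1234567890&term=widget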
The anonymous function will be executed when the response from the server is received, let’s add the code for it next; within the curly braces add the following code:
if (data.length > 0) {

    //add success icon
    $("<div>").attr("id", "success").insertAfter("#container > h2").fadeIn("fast");

    //create container for results
    $("<div>").attr("id", "results").css("display", "none").appendTo("#container");

    //add message
    $("<p>").addClass("text").text("The following pages contain the search term:").appendTo("#results");

    //create list
    $("<ul>").attr("id", "resultList").appendTo("#results");

    //process response
    for (var x = 0; x < data.length; x++) {

        //create list item
        var li = $("<li>");

        //create result
        $("<a />").addClass("result text").attr("href", data[x].url).text(data[x].title).appendTo(li);

        li.appendTo("#resultList");
    }

    //show results
    $("#results").slideDown("fast");
We first check that the length of the response object is greater than zero; if it is we create a success icon, which will be added directly after the h2 element. We’ll be adding some other icons later on in the CSS, but this one needs to be created here. The icon will appear as in the following screenshot:
We also create a container element for the results and append it to the main container of the widget, although we don’t show it straight away. We then add a success message to the results container. The final pre-processing DOM insertion we do is to create a new unordered list element.
Next we use a simple for loop to cycle through each item in the array within our object. Each item will represent one page which contains the search term, so all we do on each iteration is create a new list item, and create a link to the page, using the URL for the href of the link, and the page title as the text of the link (we could also use it for the title of the link). The link is then appended to the list item, and the list item to the list.
Once we've been through each item in the array we then show the results using a nice slideDown animation. Now we need to cater for the case where the response object is just an empty array, which means that no pages contain the search term:
} else {

    //show error message
    $("<p>").addClass("text").attr("id", "noResults").text("Sorry, no results found").appendTo("#container").slideDown("fast");
}
All we need to do is show an error message indicating that no matching pages were found. We append it to the page and again show it with an animation. The final bit of jQuery will be executed regardless of whether results are returned or not:
//remove spinner
$("#spinner").remove();

//enable button and input
$("#search").attr("disabled", "");
$("#query").attr("disabled", "");
We simply remove the loading spinner and re-enable the button and input so that further searches can be carried out. At this point either the list of results or the failure message should be visible. This brings us to the end of the jQuery code; we just need to add some CSS now and our widget should be complete. Save the file as search.html.
A Little Styling
We now just need to add a little CSS to complete the widget; a lot of the styles are purely for decoration, but some of them contribute to how the widget behaves. The list of the search results will appear to drop down from the bottom of the search container like a menu and will overlay any other content that may be on the page, so some of the styles are required.
In a new page in your text editor, add the following code:

/* the .text class (shared font styles for the widget's text
   elements) and the sizing and positioning rules for the basic
   widget elements (#container, the heading, label, #query and
   the button) go here */

#results {
    position:absolute;
    width:218px;
    padding:0 20px 14px;
    background-color:#4a4747;
    left:-2px;
    top:80%;
    display:none;
}

#resultList {
    margin:0 0 10px;
    padding:0;
}

#resultList li {
    list-style-type:none;
}

#resultList li a {
    text-decoration:none;
}

#resultList li a:hover {
    text-decoration:underline;
}

#noResults {
    position:absolute;
    margin:8px 0 0;
    display:none;
    left:20px;
    top:55px;
    background:url(cross.gif) no-repeat 0 1px;
    padding-left:18px;
}
Save this file as search.css. The first selector is a class for all of the text in the widget and is just more convenient than setting all of the same rules on each of the text elements individually. Other than that we set some sizes and some positioning. Most of the elements that we create dynamically are initially hidden so that we can animate them in instead of just showing them instantly.
The widget has enough space in it for the no results and error messages, so these are positioned absolutely within the outer widget container. The drop-down results menu is also positioned absolutely, so that it doesn’t push other page content around, and is aligned to the left and bottom edges of the widget container. We also add the icon images here too.
Summary
This is now all of the code we need to create the fully functional widget. To give the example more impact, the code download contains a fake site with plenty of content that we can search for. All you need to do is drop the main folder into a content-serving directory of a web server that has PHP installed and configured.
We should find that if we hit the button without entering a word in the text field we see the error message, and if we search for something that isn’t found we see the no results message. When we search for a term that is found, we get to see the list of results drop down and can navigate to any page that is listed. The next screenshot shows how the results menu will appear (I’ve also added some dummy content to the search page so that we can verify that the menu will overlay other content without messing with the page flow):
Part 2
A little while ago we built the first version of this widget; this time, instead of navigating folders and subfolders looking for pages, we're going to search a database.
Overall we’re going to be doing the same kind of thing; we’ll capture the search word from an input, request a PHP file, passing in the search word. We’ll then look for the search term and pass back a list of URLs of pages that contain the search term as a JSON object. We can then process the response and build a list of results to display (or provide a message if the term was not found).
Believe it or not, searching a database for content is actually easier and requires less code than recursively and exhaustively searching through directories and subdirectories, so for roughly the same amount of code, we can do the same thing as before, but add some new features such as sorting the results, and making the list keyboard-navigable. For reference, we’ll be using MySQL version 5.0 and PHP version 5.2.
The topics we’ll be covering are summarised below:
- Creating and populating a database with the MySQL CLI (Command Line Interface)
- Reading a database with PHP
- Searching through text with PHP
- Working with PHP arrays – multidimensional and associative
- Creating JSON and passing it to the browser in a way so that JSONP can be leveraged
- Processing JSON with jQuery
- Working with Keyboard events in jQuery
Getting Started
You may not have read the previous tutorial, or have the source files from that example to hand. Don’t worry; you don’t need them. We’ll look at every line of code that needs to be written as if this were a standalone article. Having read the previous tutorial is not a requirement for reading this one, although, it will give you a better grounding.
The first thing we should do is create a working environment; we’ll need a full web server installed and configured, with PHP and MySQL also available. Within the content-serving directory of the web server create a new folder called dbSearch. This is the project folder and is where all of the different resources we create will be stored. At this point, all that needs to go into this folder is the latest version of jQuery (1.3.2 at the time of writing).
Creating the Database
When deploying this widget on a blog or data-driven site, the database containing the content will already exist. For the purpose of this tutorial however, we’ll need to set one up. We can do this quickly and easily using the MySQL CLI. In the CLI (having opened it and entered the password) type the following command:
CREATE DATABASE dbSearch;
Note that the CLI is not case-sensitive, but a lot of people, especially those new to MySQL, find that capitalising the commands helps distinguish commands from values and other identifiers. This command will create a new database called dbSearch. Once created, we should select the database for use with the following command:
USE dbSearch
The USE command is one of the few, possibly only, commands that don’t need to end in a semi-colon. Next we should create a table to store the data in. We can do this with the CREATE command:
CREATE TABLE pages(url TINYTEXT, title TINYTEXT, content MEDIUMTEXT);
This command will create a new table called pages and add three columns to it; a url column which we’ll use to store the relative filename for each page in, a title column which we can store the page titles in, and a content column which we’ll store the page content in. We use variants of the TEXT data type for the data that we’ll add to the table; the first variant TINYTEXT accepts up to a maximum of 255 characters, not a lot in the grand scheme of things, but easily enough for any of the URLs or page titles we’ll be using in this example.
The content column is set to the MEDIUMTEXT data type, which allows for up to 16777215 characters (around 16 MB). In a proper implementation we'd probably want to use LONGTEXT, which allows for up to roughly 4 GB of text. Remember that depending on the platform your blog is built on, the database and tables will probably already exist and be configured. At this point, the CLI window should appear something like the following screenshot:
This is another step that will not need to be completed when deploying to a live site as the database will already contain this kind of information. For the purposes of this tutorial however, we need some fake data to search through.
There are a range of different methods for entering data into a database table; we could manually enter the data one record at a time, which is ok for small datasets but probably a little monotonous for the amount of data we’ll be entering in this example. Instead we’ll feed a text file to our database to populate it with some content. The text file is included in the code download for this example.
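For comparison, entering a single record manually would look something like this (the values are just placeholders):

INSERT INTO pages (url, title, content)
VALUES ('about.html', 'About Us', 'Some page content to search through...');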
When loading data into a database using a text file, the contents of the file needs to be structured in a specific way. Each line corresponds to a single record, and the data for each column should be separated by tab spaces, just like the following example:
afilename.html The Title Some content, lorem ipsum etc, etc.
Having created the text file (or having extracted it from the code download), we need to tell MySQL to consume it and use the data within it to populate our table. The following command will achieve this:
LOAD DATA LOCAL INFILE 'c:/apache site/dbSearch/tableData.txt' INTO TABLE pages LINES TERMINATED BY '\r\n';
We simply tell the server which file contains the data, the table it is to be loaded into, and how each line in the file is terminated; fields are assumed to be tab-separated by default, which matches our file. The string specified as the line terminator will vary depending on the platform that is used to create the text file.
To ensure that the data is loaded into the table correctly, we can check it using the SELECT command:
SELECT * FROM pages;
This should simply select all of the data in the table and output it to the CLI, as shown in the following screenshot:
Each column and record is separated by the pipe character |.
The Server-Side PHP
Now that we have our data source we can work on the file that will interact with it – the server side PHP file. Coding this file next means that when we come to do the jQuery that will bring everything together, the data will be available for us to use. In a new file in your text editor add the following code:
<?php
//db connection details
$host = "localhost";
$user = "root";
$password = "your_password_here!";
$database = "dbSearch";

//get search term from GET
$term = " " . $_GET["term"] . " ";

//make connection
$server = mysql_connect($host, $user, $password);
$connection = mysql_select_db($database, $server);

//query the database
$query = mysql_query("SELECT * FROM pages");

//loop through and return results
for ($x = 0, $numrows = mysql_num_rows($query); $x < $numrows; $x++) {

    $row = mysql_fetch_assoc($query);

    if (substr_count(strtolower($row["content"]), strtolower($term)) === 0) {
        continue;
    } else {

        $urls[] = array(
            "url" => $row["url"],
            "title" => $row["title"],
            "occurs" => substr_count(strtolower($row["content"]), strtolower($term))
        );
    }
}

// Comparison function
function cmp($a, $b) {
    return ($a["occurs"] > $b["occurs"]) ? -1 : 1;
}

// Sort the array
uasort($urls, "cmp");

//set GET response and convert data to JSON
$response = $_GET["jsoncallback"] . "(" . json_encode($urls) . ")";

//delay response
sleep(1);

//echo JSON to page
echo $response;
?>
Save this file as search.php in the dbSearch folder. This is much less server-side code than we needed in part one - using the database really helps to streamline our code here. Let’s walk through the file and see what we do.
The first four variables are used to store the connection information that we’ll need to supply in order to interact with the database. Don’t forget to change the password variable to the one you use to sign in to the MySQL CLI. Next we obtain the search term that will be passed to the script by the page. We enclose the search term within blank spaces so that only the word by itself is matched, and not words within other words. For example, if we didn’t do this a search for hole would also match whole.
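A quick hypothetical check shows the effect of the padding:

<?php
$content = " the whole team fell into a hole ";

// without padding, "hole" also matches inside "whole"
echo substr_count($content, "hole");   // 2

// with padding, only the standalone word matches
echo substr_count($content, " hole "); // 1
?>

One side effect of this simple approach is that a word followed by punctuation, or sitting at the very start or end of the content, won't be matched.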
Next we connect to the MySQL server and select the database using the variables we just defined. We also query the database, selecting all of the rows of data in the table. We use the same command for selecting the data as we did when using the CLI earlier.
We then loop through each row of data returned by the query using the PHP for loop. The first clause of the loop does two things: it initialises a counter variable $x and stores the total number of rows returned by the query; the condition keeps the loop running while the counter is less than the total number of rows, and the final clause increases the counter by 1 on each iteration of the loop.
Searching the Data
The first thing we do within the loop is store the current row of data in the $row variable using the mysql_fetch_assoc PHP function which returns an associative array where each column in the table row appears as an item in the array. The column name is the label used to access each item in the array.
We then search the content item in the array using an if statement and the substr_count PHP function to see if the search term occurs 0 times. If it does occur 0 times we simply continue to the next iteration of the loop. If the string occurs more than 0 times we then create a new multidimensional array called $urls and add to it the url and title items of the array and a new item called occurs, which is the integer returned by the substr_count function. Each item in this array is itself an associative array. We make use of the strtolower function to make the search case insensitive.
The result of this code is a multidimensional array where each item is an associative array containing the URLs, page titles and the number of times the search term was found of pages whose content contains the search term. One thing we can do next to really add value to the search is to sort the array so that the first item in the array contains the highest number of occurrences of the search term, giving each result a ‘rank’. We can do this very easily using the uasort PHP function.
The uasort function expects 2 arguments; the array to sort and a custom function where the items in the array are compared. The cmp function is a custom comparison function which accepts 2 of the items from the array passed to uasort. It returns -1 if the value of the first item's occurs property is greater than the second's, and 1 otherwise, which sorts the array in descending order of occurrences. The uasort function preserves the keys of the outer array, so after sorting each item keeps its original pre-sort index number as its label.
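Here's a minimal standalone sketch of the sort in action, using made-up data:

<?php
$urls = array(
    array("url" => "a.html", "title" => "Page A", "occurs" => 2),
    array("url" => "b.html", "title" => "Page B", "occurs" => 7),
    array("url" => "c.html", "title" => "Page C", "occurs" => 4)
);

function cmp($a, $b) {
    return ($a["occurs"] > $b["occurs"]) ? -1 : 1;
}

uasort($urls, "cmp");

// keys are preserved, ordered by occurrences descending
print_r(array_keys($urls)); // Array ( [0] => 1 [1] => 2 [2] => 0 )
?>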
We then convert this array to a JSON object and wrap it in parentheses prefixed with the callback name. Each property of this object is a nested JavaScript object that we'll be able to process quickly and efficiently within the browser. Just before we echo back the JSON object we use the PHP sleep function to delay the response by 1 second; this part of the script shouldn't be used if and when this widget is deployed.
The JSON Data Structure
JSON is a subset of JavaScript that allows us to define simple or complex objects and arrays. The values of these data structures can be any of a number of different data types including strings, numbers, Boolean or null values and can even be other objects, as in this example. PHP’s native json_encode function works by preserving standard arrays so that they remain as arrays, but converting associative arrays to objects. When this occurs, the label of the associative array item is used as the name of the property.
The structure of the JSON object that we’re using in this example is different than the structure we used in part one of this tutorial. The reason for this, as I explained earlier, is because of the additional data supplied by the uasort function. Previously, our JSON object was an array, which we were able to iterate through rapidly and easily. The fact that our JSON object’s structure has changed doesn’t make it any harder to get at our data, as we’ll see shortly.
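The difference in how json_encode treats the two kinds of array is easy to see with a quick test:

<?php
// sequential numeric keys encode as a JSON array
echo json_encode(array("a", "b"));           // ["a","b"]

// string or non-sequential keys encode as a JSON object
echo json_encode(array("x" => "a"));         // {"x":"a"}
echo json_encode(array(1 => "a", 0 => "b")); // {"1":"a","0":"b"}
?>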
For reference, we can easily return our JSON object to an array by wrapping the array within the array_values PHP function inside the json_encode function, like this:
//set GET response and convert data to JSON
$response = $_GET["jsoncallback"] . "(" . json_encode(array_values($urls)) . ")";
The code that we’ll be using to process our JSON object in this example however is extremely flexible because it can be used to access the data in our new JSON format, but the exact same code can be used to access the array format that the JSON took in part 1.
Styling the Search Widget
Next we can define the stylesheet for our widget; in a new file in your text editor add the following code:

/* the shared .text class and the sizing and positioning rules
   for the basic widget elements (#container, the heading, #query
   and the button) go here, much as in part 1 */

#noResults {
    position:absolute;
    margin:8px 0 0;
    display:none;
    left:20px;
    top:55px;
    background:url(cross.gif) no-repeat 0 1px;
    padding-left:18px;
}

#results {
    position:absolute;
    min-width:254px;
    background-color:#4a4747;
    border:2px solid #373434;
    border-top:0;
    left:-2px;
    top:80%;
    display:none;
}

#results p, #resultList li {
    padding-left:20px;
}

#resultList {
    margin:0 0 10px;
    padding:0;
}

#resultList li {
    list-style-type:none;
    white-space:nowrap;
}

#resultList li a {
    text-decoration:none;
}

#resultList li a:focus {
    outline-color:#079d67;
}

.selected {
    background-color:#079d67;
}
Save this file as search.css in the dbSearch folder. Let’s look at the styles that we define; first we create a class for text elements so that the various bits of typography are consistently styled. The next four id selectors target the default elements that appear in the search widget when the page initially loads. Pretty much all of the styles set by these rules are arbitrary and have been decided by me for the purposes of this example. This is how it’ll look when the page loads:
The #error and #success selectors are for the different types of feedback that the user may receive, such as the message that is shown when no search term is entered, the message that is shown when the search term hasn’t been found, or the icon that is shown when results are found. Again most of the styles used for these elements can be changed according to your preference. The most important rule in each of these is display:none; which of course hides them until they are ready to be shown. The following screenshot shows the error message:
The remaining rules are all used to style the list of results that is produced when the search term is found in the data store. We don’t know beforehand how long each result is going to be, so we use the min-width rule to allow the width of the result list to grow. The white-space:nowrap rule also prevents the width of each of the results from being restricted.
Again, a lot of the rules here are used to set this particular skin, nothing more, so you can change the appearance of the widget easily without preventing it from working. The menu is positioned absolutely so that it does not interfere with other elements on the page. When the results are presented the first result is focused and has the selected class applied to it. We set the focus outline of the link to the same color as the selected class instead of removing the focus outline for accessibility reasons. The following screenshot shows how the results list will appear:
Creating the Page Shell
We'll look at the underlying page first; in a new file in your text editor create the following page (the heading text is just a placeholder):

<html>
  <head>
    <title>Search Widget</title>
    <link rel="stylesheet" type="text/css" href="search.css">
  </head>
  <body>
    <div id="container">
      <h2 class="text">Site Search</h2>
      <input type="text" id="query" class="text" tabindex=1>
      <button id="search" class="text">Search</button>
    </div>
    <p>Lorem ipsum dolor...</p>
    <script type="text/javascript" src="jquery-1.3.2.min.js"></script>
    <script type="text/javascript">
      $(function() {

      });
    </script>
  </body>
</html>
The page is as simple as possible, having just the search widget and some layout text present. The text is there to show how the result list overlays any existing page content (although there is no provision for overlaying flash content or select boxes). The widget itself is also very minimal, containing a simple heading, the search input, and a button. The rest of the elements that we’ll need we can create as and when necessary.
We link to our stylesheet in the head of the page, as well as a local version of jQuery at the end of the body. After this, we leave a script element containing the standard jQuery document.ready function which is where the bulk of our code will reside. We can add this next; there’s a lot, so after the following code sample we’ll look at what each bit does individually:
//click handler for button
$("#search").click(function() {

    //hide any leftover messages, results or icons from a previous search
    ($("#noResults").length > 0) ? $("#noResults").fadeOut("fast", function() {
        $(this).remove();
    }) : null;

    ($("#error").length > 0) ? $("#error").fadeOut("fast", function() {
        $(this).remove();
    }) : null;

    ($("#results").length > 0) ? $("#results").fadeOut("fast", function() {
        $(this).remove();
    }) : null;

    ($("#success").length > 0) ? $("#success").fadeOut("fast", function() {
        $(this).remove();
    }) : null;

    //check input not empty
    if ($("#query").val()) {

        //disable button and input
        $(this).attr("disabled", "disabled");
        $("#query").attr("disabled", "disabled");

        //add spinner
        $("<img>").attr({
            src: "spinner.gif",
            id: "spinner"
        }).css({
            position: "absolute",
            top: 27,
            right: 80
        }).insertAfter("#search");

        //get search term and build querystring
        var term = $("#query").val(),
            query = "term=" + term;

        //request JSON object of matching urls
        $.getJSON("search.php?jsoncallback=?", query, function(data) {

            if (!data) {

                //show error message
                $("<p>").addClass("text").attr("id", "noResults").text("Sorry, no results found").appendTo("#container").slideDown("fast");
            } else {

                //add success icon
                $("<div>").attr("id", "success").insertAfter("#container > h2").fadeIn("fast");

                //create container for results
                $("<div>").attr("id", "results").appendTo("#container");

                //add message
                $("<p>").addClass("text").text("The following pages contain the search term:").appendTo("#results");

                //create ordered list
                $("<ol>").attr("id", "resultList").appendTo("#results");

                //process response
                for (var prop in data) {

                    //create list item and result link
                    var li = $("<li>");

                    $("<a>").addClass("result text").attr("href", data[prop].url).attr("title", data[prop].title).text(data[prop].title).appendTo(li);

                    li.appendTo("#resultList");
                }

                //select and focus the first result
                $("#resultList a:first").addClass("selected").focus();

                //show results
                $("#results").slideDown("fast");
            }

            //remove spinner
            $("#spinner").remove();

            //enable button and input
            $("#search").attr("disabled", "");
            $("#query").attr("disabled", "");
        });
    } else {

        //display error
        ($("#error").length > 0) ? null : $("<p>").attr("id", "error").addClass("text").text("Please enter a search term!").appendTo("#container").slideDown("fast");
    }
});
All of this code is encapsulated within a click handler for the search button; within the anonymous function we pass to the click helper method there are several distinct sections. Let’s look at each of them in turn.
Housekeeping
Our first task is to tidy up, as this may not be the first time the button has been clicked and there may be elements left over from previous interactions. There are four things we need to look for and remove if they are present:
//hide noResults message if present
($("#noResults").length > 0) ? $("#noResults").fadeOut("fast", function() {
    $(this).remove();
}) : null;

//hide error if present
($("#error").length > 0) ? $("#error").fadeOut("fast", function() {
    $(this).remove();
}) : null;

//hide results if present
($("#results").length > 0) ? $("#results").fadeOut("fast", function() {
    $(this).remove();
}) : null;

//hide success icon if present
($("#success").length > 0) ? $("#success").fadeOut("fast", function() {
    $(this).remove();
}) : null;
We can test whether each of these elements exist using the JavaScript ternary construct; if they do exist we fade them out, if they don’t exist we do nothing.
Pre-Search Processing
Next there are a few things we need to do before we perform the actual search; we first check that the input field in the widget does have a value and if it doesn’t, we show the error message:
/; } else { //display error ($("#error").length > 0) ? null : $("<p>").attr("id", "error").addClass("text").text("Please enter a search term!").appendTo("#container").slideDown("fast"); }
There is actually a lot more code inside the first part of the conditional, but this is everything we do before we make the request; we first disable both the button and the input element, to prevent multiple submissions while a request is in progress. We also add an AJAX spinner so that the visitor knows that something is happening behind the scenes.
The last thing we do before actually making the request is to prepare the data that is to be sent to the server, which is the search term entered into the input field.
Requesting and Handling the Response
We’re now ready to request and process the response; making the request is easy with jQuery’s getJSON method:
//request JSON object of matching urls $.getJSON("search.php?jsoncallback=?", query, function(data) { });
The method takes three arguments; the URL of the server-side resource that will receive the query and return the data. The second argument is the data to send to the server, and the last one is an anonymous callback function which will be executed if the request is successful. The code that goes into this anonymous function is used to process the response and display the results:
if (!data) { //show error message $("<p>").addClass("text").attr("id", "noResults").text("Sorry, no results found").appendTo("#container").slideDown("fast"); } else { } //remove spinner $("#spinner").remove(); //enable button and input $("#search").attr("disabled", ""); $("#query").attr("disabled", "");
In this section we first check that there is data in the response; if the search term isn’t found in the database null will be returned and if this is the case we can create and show a message. The message will appear like this:
After we’ve processed the object and displayed either the ‘no results’ message or the results, we can then remove the AJAX spinner, and enable the button and input to allow for additional searches to be performed.
Displaying the Results
If the search term is found, we’ll have a JSON object to process and results to create and display:
/(); });
First we need to create a few new elements; we create a success icon, which is inserted into the widget and positioned so that it appears next to the widget’s title (it just looks out of place anywhere else), and a container element for the results. We also create a message stating that the following list contains the search results.
The order of the items in the list is important in this context, so we create an ordered list element which we’ll populate in just a moment. The search results are in descending order within the JSON object, with the highest ranking (i.e. the page that contains the highest number of occurrences of the search term) item at the top.
We then use a for in loop to iterate over each property within the JSON object; on each iteration we create a new list item, then a new anchor element. We give the link some class names, so that they pick up the appropriate styles. We set some of the attributes of the link element using various values of the properties of the inner objects within the JSON object.
This is achieved using a combination of bracket and dot notation. The object consists of a series of properties and the value of each property is another object. Within each inner object there are another series of properties whose values contain our data. To access each property within the outer object we use the variable prop, which is defined in the loop, using square bracket syntax: data[prop] and to access our data, we simply append the name of the property who’s value we’d like to obtain: .url for example.
We also add the text content of the anchor element using data from our object before finally appending it to the list item. Following this the list item is appended to the list. One the loop has ended and all of the list items have been created and appended, we then need to give the first and last list items specific class names so that we can easily reference them later on in the script.
In the final part of this section of code we slide the results list into view and then select the first item in the list, giving it a class name and focusing it. This is pretty much where we finished off in part one of this tutorial, but now we’re going to add keyboard navigability to the results.
Enabling Keyboard Navigation
We attach our event listener to the anchor within the list item that has the class name selected, which we applied to the first item in the list; this way the widget will only be listening for events when it is relevant to do so. We attach the listener using jQuery’s live method so that we don’t have to keep rebinding to the event whenever we show the results:
//listen for keyboard events $(".selected").find("a").live("keydown", function(e) { });
Whenever the keydown event is detected the anonymous function is executed, and is automatically passed the event object. Before we get on with moving the selection to the next item in the list, there are a couple of things we need to do:
//prevent default browser behaviour for tab key (e.keyCode == 9) ? e.preventDefault() : null ; //close results if escape key clicked (e.keyCode == 27) ? $("#results").fadeOut("fast", function() { $(this).remove(); $("#query").val(""); }) : null ;
First we need to check whether the key that was pressed was the tab key, as this has its own default behavior that needs to be disabled. We do this by using the JavaScript ternary to see whether the keyCode property of the event object is equal to 9. If it does we prevent the browser’s default behavior using the preventDefault method.
We can also check whether the escape key was pressed and if it was we can close the results list and reset the value of the input field.
Next we need to do different things depending on whether the currently selected list item is the first or last item in the list, which we can test using the class names we added earlier:
if($(this).parent().hasClass("first")) { //program up and down arrow keys and tab to move highlight (e.keyCode == 40 || e.keyCode == 9) ? $(this).parent().removeClass("selected").next().addClass("selected").children(":first").focus() : (e.keyCode == 38) ? $(this).parent().removeClass("selected").parent().children(":last").addClass("selected").children(":first").focus() : null ; } else if($(this).parent().hasClass("last")) { //program up and down arrow keys and tab to move highlight (e.keyCode == 40 || e.keyCode == 9) ? $(this).parent().removeClass("selected").parent().children(":first").addClass("selected").children(":first").focus() : (e.keyCode == 38) ? $(this).parent().removeClass("selected").prev().addClass("selected").children(":first").focus() : null ; } else { //program up and down arrow keys and tab to move highlight (e.keyCode == 40 || e.keyCode == 9) ? $(this).parent().removeClass("selected").next().addClass("selected").children(":first").focus() : (e.keyCode == 38) ? $(this).parent().removeClass("selected").prev().addClass("selected").children(":first").focus() : null ; }
Within each branch of the conditional we also need to react differently depending on which key was pressed; we’re targeting the up and down arrow keys, which move the selection up or down the list respectively, and also the tab key, which will move the selection down the list. We use a nested ternary conditional for its compact syntax, which is equivalent to an if else statement.
Each branch of the outer conditional contains very similar expressions; let’s walk through the first one to see what’s going on. The first part of the ternary checks for the down arrow key or the tab key, if either of these is detected we navigate up from the anchor, which is in the context of $(this), to the parent list item and remove the selected class name. Then we navigate to the next sibling list item and give it the class name selected. We then navigate down to its first child, which will be the anchor element, and focus it.
If neither of these keys is detected we then check for the up arrow key, represented by 38. We want the selection to cycle through the list as if it were a menu, so if the up key is pressed while the first item is selected, we should move to the last item in the list and apply the selected class and focus. If none of these keys are detected we do nothing.
The ternary within the next branch of the outer conditional is very similar but this time we are looking at the last list item, but this time if the down or tab key is pressed, we move the selection to the first item in the list. The final condition again is very similar, but this time we just move the selection up or down depending on which key was pressed.
Catering for Mouse Interactions
Just because we’ve built keyboard navigation into our widget, it doesn’t necessarily mean that every visitor is going to make use of it, so for consistency we should move the selection around if the mouse pointer hovers over any of the list items. The code for this is very simple indeed:
//listen for mouseover $("#resultList").find("li").live("mouseover", function() { $(this).parent().children().removeClass("selected"); $(this).addClass("selected").children(":first").focus(); });
This is a simpler version of what we did with the keyboard event handlers; when the pointer hovers over a list item, we remove the selected class from all of list items and then add it back to whichever item was hovered, focusing the anchor element as we go.
Finally we can add a function that will close the result menu if any element outside of the menu is clicked. We do this by attaching a click handler to the body tag, and checking that the element that was clicked does not have a parent higher up in the DOM which is an ordered list with an id of resultList:
//add click handler to body $("body").click(function(e) { //close results if anything outside results is clicked if($(e.target).closest("ol").attr("id") != "resultList") { //remove menu ($("#resultList")) ? $("#results").fadeOut("fast", function() { $(this).remove(); $("#query").val(""); }) : null ; } });
Attaching the event listener to the body in this situation is useful because a click on absolutely any element on the page will bubble up to the body where we can capture and examine it.
Summary
We should now have a fully working widget which will allow us to search all of the content from a site that is contained within a database. The list of results will be both keyboard and mouse navigable and should appear as in the following screenshot:
Let’s recap what we’ve covered in this tutorial:
- Many web sites and blogs are powered by a database in which all of the page content is stored. We looked at an example MySQL data source and saw how we can easily populate a test database using a simple text file.
- We looked at how we can use PHP to retrieve the information from the database and search through it to look for occurrences of the search term. We looked at constructing a data structure, sorting the data, and converting it to an easily consumed JSON object.
- We then looked at how to process this object in the browser and update the DOM of the widget to reflect whether the search was successful or not. We at looked at some of the considerations required to enable keyboard navigation of the results, turning it into a menu.
Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this postPowered by
| http://code.tutsplus.com/articles/asynchronous-search-with-php-and-jquery-part-2-plus-tutorial--net-5052 | CC-MAIN-2015-35 | refinedweb | 8,419 | 57.61 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Hey,
I try to synchronize incoming OSC with the GUI of Guido to be able to use both to modify my sketch parameters. Unfortunately there is no documentation yet and no example for doing this.
So for a start I simply wanted to figure out how to set values of a slider using the mouse. This is my attempt modifying the slider example of the library by adding a
set() method to the
Slider class:
/** * A slider * * .. works with JavaScript mode since Processing 2.0a5 */ import de.bezier.guido.*; Slider slider; void setup () { size(400, 400); // make the manager Interactive.make( this ); // create a slider slider = new Slider( 2, 2, width-4, 16 ); } void draw () { background( 0 ); fill( 255 - (slider.value * 255) ); ellipse( width/2, height/2, 150, 150 ); fill( slider.value * 255 ); ellipse( width/2, height/2, 70, 70 ); slider.set((float)mouseX); } public class Slider { float x, y, width, height; public float valueX = 0, value; Slider ( float xx, float yy, float ww, float hh ) { x = xx; y = yy; width = ww; height = hh; valueX = x; // register it Interactive.add( this ); } // called from manager void mouseDragged ( float mx, float my ) { valueX = mx - height/2; if ( valueX < x ) valueX = x; if ( valueX > x+width-height ) valueX = x+width-height; value = map( valueX, x, x+width-height, 0, 1 ); } void draw () { noStroke(); fill( 100 ); rect(x, y, width, height); fill( 120 ); rect( valueX, y, height, height ); } void set (float _value) { Interactive.set(this,"value",_value); } }
But I only get
Interactive.set() ... unable to find a field with that name in given target.
How can I fix this?
Thanks a lot!
Answers
I don't know well Guido, so I wonder: why do you use Interactive.set() instead of setting the value directly to the
valuevariable?
You might need to adjust the value of
valueXin this setter as well.
Thanks PhiLho!
Well, I hoped this was something the library would take care of for me and provide such functionality. At least ControlP5 does it.
If there is no super easy solution as I expected, I'll just go without a GUI in my sketch right now. In case I extend the code in that way later, I can post it here.
Well, again, isn't just
working for you? I am not sure what you try to achieve.
Perhaps you want
valueX = _value;actually, since
valueis overwritten / computed. | https://forum.processing.org/two/discussion/9422/how-to-set-values-of-a-gui-element-in-guido | CC-MAIN-2019-26 | refinedweb | 415 | 66.13 |
- Introducing the Vector
- Demonstrating Vectors
- Summary
Demonstrating Vectors
Now here's a demonstration of the vector type. But wait! If you read the previous section, you probably noticed that I mentioned that templates are not classes; they're cookie cutters for classes. So what is vector? It's really a template. That means that you have to create a class based on the vector template. And when you do, you specify the type of data that the vector will hold.
Suppose you want to create a vector that can hold 10 integers. Here's an example:
#include <iostream> #include <vector> using namespace std; int main() { vector<int> storage(10); storage[0] = 10; storage[1] = 20; storage[2] = 30; for (int i=0; i<storage.size(); i++) { cout << storage[i] << endl; } }
Here's the output:
10 20 30 0 0 0 0 0 0 0
In this code, I created an instance of the class vector<int>, which is based on the vector template. I called the instance storage, and specified that it would hold 10 items. Then I accessed the elements just as I would an array. But by itself, this example really isn't any better than an array.
Vectors are better than arrays, however, because you don't have to specify a size up front. But when you do so, you have to use a different method of putting items into the vector. Here's a revised version where I don't specify the size up front, and I use the push_back function, which appends an item to the vector:
#include <iostream> #include <vector> using namespace std; int main() { vector<int> storage; storage.push_back(10); storage.push_back(20); storage.push_back(30); for (int i=0; i<storage.size(); i++) { cout << storage[i] << endl; } }
As you append items to the vector, the size will change. Thus, the call to size() in this example will return 3. Here's the output:
10 20 30
Not bad! But it gets better. The usual way of moving through a vector (and other containers) is by using an iterator. People tend to be confused by iterators, but they're really not complicated. An iterator is simply a pointer to an item inside the vector. That's all. But iterators are smart, and you can move about the vector. Here's an example:
#include <iostream> #include <vector> using namespace std; int main() { vector<int> storage; storage.push_back(10); storage.push_back(20); storage.push_back(30); vector<int>::iterator iter = storage.begin(); while (iter != storage.end()) { cout << *iter << endl; iter++; } }
This example creates an iterator called iter, and starts it out pointing to the beginning of the vector. But look carefully at the iterator's type. The type is vector<int>::iterator. Each container class has a member type called iterator. The container class in this case is vector<int>, so the iterator's type name is vector<int>::iterator.
The begin() function returns an iterator pointing to the beginning of the vector. The end() function returns an iterator pointing to the end of the vector. So I start at the beginning and perform a while loop, testing whether the iterator is at the end of the vector. If not, I print out the iterator. But notice I have to dereference the pointer by saying *iter. The iterator is really a pointer, remember, and so I use *iter to get to the contents.
Do you see what's happening? The iterator points to an item in the vector. That's all! Then, to move to the next item in the vector, I simply use iter++.
Here's the output:
10 20 30 | http://www.informit.com/articles/article.aspx?p=102155&seqNum=2 | CC-MAIN-2018-05 | refinedweb | 606 | 66.64 |
Table of Contents
Squish 6.3 is a feature release which delivers new features to all editions of the product.
Furthermore, a lot of features and bugfixes were applied to individual Squish editions since Squish 6.2; see the following sections for a detailed list of all changes.
As a supplement to the main property-based object identification, elements can be located on the screen based on their visual appearance. This enables interactions with custom elements, controls from unknown toolkits, and controls from outside of the main application.
The Squish IDE now features the insertion of user interactions
like
mouseClick() and
tapObject() based on
images. The search is performed through a newly added
waitForImage function. The algorithm
employed is still quite simple, but future versions will provide more
sophisticated look-up options that can deal with color and resolution
differences. See How to Do Image-Based Testing (Section 5.24) for an
example automating a chess application.
Once the extension leaves its beta state it will require an additional license. At that point a time-limited free upgrade option will be available for all existing customers.
The new functions
test.compareTextFiles
and
test.compareXMLFiles compare the content of
two text and XML files, respectively. Instead of performing a plain,
byte-by-byte comparison, possible differences are reported for individual
lines and elements. Several configuration options exist to allow for
acceptable differences in content and formatting.
Significantly improved stability and resource usage of squishserver.
Fixed attaching to AUTs from squishserver running on Solaris.
Fixed an issue due to which executing tests using verification point files would not generate valid XML version 3.0 or 3.1 report files.
Fixed gem utility shipped with Ruby interpreter of Windows and macOS packages such that it's possible to install additional Ruby gems.
Fixed an issue causing object lookups to fail in case a
property value contains the substring
}.
Improved detection of invalid command line arguments for squishrunner and squishserver.
The
--retry parameter for squishrunner
now works as intended for BDD
scenarios.
New report generator versions:
xml3.2
and
json1.2. They include the retry count for a
retried test case resp. scenario.
The xml2jira (Section 7.4.10) utility now allows creating
and updating JIRA tickets which ask for user-defined
CustomFields
to be set.
The xml2jira (Section 7.4.10) utility now supports handling tickets in user-defined workflows involving custom ticket states and custom ticket state transitions.
Python Interpreter for PyDev is always automatically
configured based on the
PythonHome entry from the Squish
installations
SQUISH_DIR/etc/paths.ini
file.
Fixed a problem sometimes causing an error dialog to be shown while looking at tooltips in the feature editor.
Enforce the Test Suites view to be opened when the Squish IDE starts as it is crucial for using the Management Perspective.
Fixed a problem with activating the Scenarios tab in the Test Suites view when opening a test suite where the first test case is a BDD test.
Added option to disable generation of warnings for missing step implementations in BDD tests. See Edit > Preferences > Squish > BDD Support.
Support stepping out of a function to the caller (also known as 'Step Return').
Fixed a problem in the context menu of the test data editor causing all items there to be disabled.
Improved feature file editor performance when working with test suites that contain many feature files and scenarios.
Fixed a bug in the Squish IDE's test description editor causing it to show the descriptions in a single line instead of multiple ones.
When instrumenting Android applications, the Squish IDE now
ensures that the
jarsigner executable can be found
on the system before proceeding with instrumentation.
Support opening multiple testsuites through the commandline invocation of the Squish IDE by specifying the parent directory of the test suites.
Improve handling of
source
and
findFile in JavaScript tests allowing
code-completion and Go To Declaration to work with
sourced scripts.
Improve auto-completion for
ToplevelWindow,
BDD and
Screen modules in
JavaScript.
Save and Restore the selection states in the Test Suites and Global Scripts views between Squish IDE sessions.
Support showing testcases, files and folders of the suite in the System Explorer.
The Squish IDE window is not deactivated as intended when creating screenshots or visual verification points on macOS. This avoids potential issues with different focus indicators when creating and replaying tests.
The image viewer used for viewing image files is now configurable in the Preferences dialog.
Screenshots taken for test failures can now be opened using the internal image viewer of the Squish IDE.
Fixed an issue causing an error message to be shown when saving an object snapshot in the root directory of a Windows drive.
Improved the visual appearance of the Squish IDE when running on Windows 8 and newer.
Improved the error message when attempting to open a test data file that is neither in a test case nor in a test suite directory.
When viewing the results of visual verification points, the object hierarchy showing matched objects now works as intended.
The Run Scenario and Record Scenario in the context menu of the Gherkin editor are now only available if it's actually possible to run resp. record a scenario. They used to be enabled even if a scenario was currently being executed or recorded, and clicking them would then trigger an error.
Improved performance of Ruby-based BDD tests with a lot of step definitions.
Improved stability of XML module for JavaScript tests.
Improved stability of Tcl scripts when using the
source command to load external Tcl files with
syntax errors.
The
testData.put function now
allows copying empty files.
Made
waitForObjectItem
work correctly when the item text is empty.
It is now possible to invoke static member functions on null objects.
Using the
RemoteSystem Object (Section 6.3.19) in Ruby test
scripts no longer requires issuing an
include Squish
statement first.
Fixed JavaScript
Array.sort() for strings
with different lengths.
Fixed invoking
orientation method on Tcl
Screen Object (Section 6.3.14) when passing a screen index
value.
Fixed a bug in the
squishtest Python module
causing the
setTestResult() function to create too many
report generators; the function now throws an exception when called the
second time.
macOS only: Using the
squishtest Python
module no longer requires setting the
DYLD_LIBRARY_PATH
environment variable and make it work with the system Python on macOS
10.11 and macOS 10.12.
Fixed a bug in the
squishtest Python module
which caused the test report to not get written out completely at the end
of a test execution.
Removed unneeded
SquishRunner.py
convenience module to avoid confusion.
Fixed syntax for enum value recording in Perl to also work
when
use strict; is being used.
Fixed breakpoints in external source files in Perl.
The default timeout for
waitForObject and
similar script functions can now be configured via
testSettings.waitForObjectTimeout.
Improved memory usage of the JavaScript XML Object (Section 6.16.6).
The Python
import statement now works for test
suite resources as an alternative to Squish's own
source()
function.
Running a BDD test with a filter,
skipped scenarios (and examples of a scenario outline) appear with a
Skipped test result now (in the past, nothing was
reported at all).
Running a BDD test with a filter and
the filter causes no scenarios to be executed at all, the
OnFeatureStart and
OnFeatureEnd
hooks are no longer executed.
It is now possible to specify textual descriptions for Examples sections of scenario outlines, much like in Feature or Scenario sections.
It is now possible to specify more than one Examples section per scenario outline.
Added support Qt 5.8.0 and 5.9.0.
Added support for QVector2D, QVector3D and QVector4D types.
Added support for displaying QPalette properties including sub-properties in the IDE.
Fixed a possible crash of Squish for Qt on Android packages using Qt 5.7.
Fixed
nativeMouseClick replay
with non-primary mouse button.
Optimized object lookups based on the
text property
Optimized accessing
QModelIndex objects in
Qt models.
Fixed set of 'unmatched properties' reported for failing object lookups when running tests on Windows.
Fixed object highlighter for Squish for Qt running on iOS in landscape mode.
Fixed crash in keyboard input recording for QtQuick when the mouse was moved at the same time.
Fixed object lookup for
QObject subclasses
that expose a custom property called
name.
Fixed object lookup for
QObject subclasses
where properties mentioned in the Squish object name were skipped under
certain circumstances, causing unrelated objects to be returned instead.
If needed, the old behavior can be restored by changing the
QObjectLookupSkipMissingProperties value in the
SQUISH_DIR/etc/qtwrapper.ini configuration file.
Added support for desktop screenshots on platforms where screen grabbing fails. As an approximation, the contents of the currently maximized or fullscreen toplevel window will be grabbed if possible.
Support for
QWebEngineView objects is only
loaded into AUT processes that already have the
QtWebEngineWidgets module loaded in order to avoid
AUT freezes on startup.
Fixed a crash when invoking
QVariant::toMap() in a test script.
Improved dependency footprint on Windows by respecting
the
etc\winwrapper.ini configuration file to decide
whether to load code related to interacting with COM
objects.
Added
autoRaise,
defaultAction
and
menu as readonly properties to QToolButton.
Fixed fetching children from
QQuickWidget,
also fixing support of
QtQuickControls 2.x Popup overlays
inside this view type.
Added support for testing PyQt applications using Qt 5 on macOS.
Added
mouseWheel
function.
Fixed
waitForObjectItem such that it
correctly waits for a menu item to be ready.
Fixed possible hookup problems in SWT
applications when using a modified SWT
.jar file.
Accessing
JTable items now correctly scrolls
to the item.
startjavaaut now waits for the
AUT to be up and running before opening its listening
port.
Fixed a bug which caused hooking into JavaFx WebView controls to only work shortly after a page load.
Improved stability of traversing HTML objects contained in a JavaFx WebView control.
Added full support for SWT browser controls, enabling dedicated recognition of HTML objects on all platforms and with all browser engines.
Added HiDPI support for SWT version 4.6.
Fixed a potential issue with hooking into Java applications running on macOS.
Added support for testing with the Microsoft Edge browser.
Improve error handling when a user starts a testcase for
Firefox/Mozilla or Chrome without having the extension
installed or having an outdated extension installed. There will now be
checks done in the Squish IDE before starting the test and as part of starting
the
webhook helper process before starting the
browser. If the check fails an error is generated and the browser start
is aborted.
Improved browser extension installation procedure for Firefox/Mozilla and Chrome. The Squish IDE will trigger this when recording/running a testcase (or launching the browser) and no working extension could be detected. The extension installation is not part of the Squish installation procedure anymore.
Text input in web applications is being recorded into a
typeText command now instead of a
setText command.
Changed behavior of
typeText to only click into the beginning of the
field if the field has no focus, so that subsequent
typeText commands on the same field will append
to the field instead of prepending to it.
Improved execution speed of the
typeText function by removing a fixed three
second delay and instead relying on the focus state of the object to be
typed into.
Added a
selectAll
method to
HTML_TextBase objects that selects all the text in
the field.
Added
chooseFile function
to interact with file dialogs inside the browser.
Fixed set of 'unmatched properties' reported for failing object lookups when running tests on Windows.
Support automation of Chromium-based desktop applications (built for example using CEF, Electron, nw.js).
Support filtering of the
id property
from names generated by Squish for Web. See
FilterIdPropertyFromGeneratedNames in
SQUISH_DIR/etc/webwrapper.ini for more
information.
Make
nativeMouseClick fail
when the bounding rectangle of the object to click on has no width or
height as this usually indicates that the screen coordinates calculated
are wrong as well and would just cause a click in the top-left
corner.
Make
nativeMouseClick verify
that the final coordinates (element position + provided relative click
parameters) are within the bounding rectangle of the viewport of the
browser and fail with an error if the coordinates are outside. This is
another measure by Squish to avoid clicking somewhere on the desktop
possibly bringing an automation system into an unusable
state.
The Squish extension for Google Chrome is now being installed from the Chrome Store to avoid developer-mode warning popups.
Fixed a crash happening sometimes with Chrome when the last tab is being closed.
Fixed a crash happening with Microsoft Internet Explorer when the test script closes a tab and immediately starts looking for another browser tab name.
Fixed a problem causing
waitForContextExists to always wait for the
timeout (20 seconds by default).
Improved error reporting when using the function
ToplevelWindow.focused of the
ToplevelWindow Object (Section 6.3.15)
Fixed a regression that could lead to Squish not being able to access any elements in Microsoft Internet Explorer 11 after a link click navigated to a new page.
Fixed a crash happening when trying to access the title of a browser tab object in Microsoft Internet Explorer when a PDF is shown via an embedded viewer plugin in that browser tab.
Disallow usage of Firefox/Mozilla 57 and newer with Squish 6.3 as the extension being used by Squish is not compatible with Firefox/Mozilla 57.
nativeMouseClick and
typeText will correctly signal an error now when
activating the tab corresponding to the HTML object to
interact with fails.
A WPF example program called
AddressbookWPF is now included, showcasing the
support for WPF controls such as
DataGrid.
Fixed issue causing initial actions performed on .NET applications to not get recorded sometimes.
Fixed potential resource leak when replaying tests on .NET applications.
Handle vanishing objects more gracefully when replaying tests on .NET applications.
Fixed problem causing accesses to the
text property of MFC tree view
items to abort test execution in some cases.
Made startwinaut (Section 7.4.7.4) print status (and
error) messages when using the
--port switch to simplify
diagnosing issues related to attaching to applications.
Applications launched via Squish for Windows will no longer show the standard WER (Windows Error Reporting) dialogs to avoid blocking test execution.
Script handlers for
Crash events
installed using the
installEventHandler
function will now get invoked as expected.
Fixed error message being shown when picking (or
recording clicks on) empty Infragistics
UltraGrid
controls.
Exposed new
text property on
Infragistics menu items and toolbar buttons for
consistency.
Added
tapObject
function which is the same as
clickObject and
doubleTap function which is the same as
doubleClick. For consistency with
other Squish editions, the
tapObject function is recorded instead of
the
clickObject function. The
clickObject and
doubleClick functions are kept for
compatiblity with existing test scripts.
Fixed hanging and crashing during playback on a
WKWebView.
Desktop screenshots are now taken in Retina resolution on
Retina devices. This affects the
saveDesktopScreenshot function and the testSettings.logScreenshotOnFail setting. The
behavior of screenshots in verification points is not
changed.
Added
WebView.evalJS
method to the
WebView object type.
Better native Java method exception description than
InvocationTargetException.
Fixed regression from Squish 6.2 that prevented <IP>:<port> network device strings from being recognized.
Fixed instrumentation of APKs which use the latest Android SDK.
Improved recording of text input by no longer considering
text input on
XWalkViewBridge controls (or child controls
thereof).
Support for building squishserver with Visual Studio 6 has been dropped, customers building Squish/Qt from sources with that compiler should use the Quick Install (Section 3.1.2.1) steps.
Fixed C++11 detection when building with Qt >= 5.7.0.
Fixed building Squish on macOS against a Qt build which uses a library name infix.
Fixed
qtbuiltinhook.pri for
including the Squish for Qt builtin-hook in
qmake projects to work with recent
qmake and Qt
Creator
versions.
Added experimental support for building squishidl (Section 7.4.5) with qmake.
Improved performance for file copying during build,
most notably when using
build install.
Added support for building squishrunner and Squish IDE utilities with Qt 5.
Fixed failed assertion in
doc/book/Buildsub when building Squish from
sources with separate source and build directories but not enabling the
documentation in the build.
Fixed several data files being marked as executable in source packages.
Overhauled the Squish for Web tutorial.
The documentation of the
attachToApplication is now included in all Squish
packages. The function's documentation was also extended to document that
the timeout can be changed without specifying a host or port to connect
to.
Corrected the Squish for Java tutorial such that it no
longer claims that Squish needs to know the path to the
swt.jar file.
For a list of noteworthy issues which were found after the release of Squish 6.3.0, please see the Known Issues page on the froglogic Knowledge Base. | https://doc.froglogic.com/squish/6.3/rel-630.html | CC-MAIN-2019-18 | refinedweb | 2,842 | 57.16 |
Proper structure of MicroPython package
Recently I realized that the MicroPython code which is used to build firmware for our products got too big and hardly readable - almost 2000 LOC. Current structure is very simple - there is single module file which defines main application class. The class is then imported from this module in _main.py, then instantiated and consequently run() method is called. On top of that there are several 3rd party modules located in /lib directory (umqtt, urequests, ...) which are imported as well. All modules are frozen in custom firmware build.
We would like to re-write the code to become better structured.
The structure should allow flexible building of custom firmware which may be different for each customer with build configuration described in a single file (Python script, JSON, ...)
The package should contain directories for modules with similar functionality - low-level sensor/IC drivers, IoT platform client implementations, etc.
Several attributes and functions should be available for all modules - e.g. current configuration (Python dictionary), logging function, etc.
What are the best practices when designing and building such MicroPython package?
What is the best way to share data between modules? E.g. using global statement/namespace, passing object names as arguments, etc.
we are conceiving a medium-sized MicroPython codebase [1] called Terkin for MicroPython [1b] for the Bee Observer project, which might well resonate with your questions and requirements.
So, we want to humbly point you to the
terkinfolder [2] within that repository which might spark some ideas for your development process on how to structure a codebase appropriately by using object oriented programming.
Similar to what you are describing, the datalogger program also uses quite a bunch of 3rd party modules which will get populated into a
dist-packagesfolder through
make setup[3] after cloning the repository. Thus, these packages are not part of the repository itself but will be acquired at development/build time.
Maybe this helps on giving you some ideas around what you have been asking for.
With kind regards,
Andreas.
P.S.: Just recently, our datalogger program started to spark interest with others and we got support from @poesel to integrate support for BLE [4,5] (thanks!) as well as LoRa/TTN telemetry [6] from other contributors. Saying this, we will be happy to accept further contributions from the community as we are aiming to make this more generic beyond its original scope of beehive monitoring.
[1]
[1b]
[2]
[3]
[4]
[5]
[6] | https://forum.pycom.io/topic/3860/proper-structure-of-micropython-package/1 | CC-MAIN-2020-29 | refinedweb | 412 | 55.34 |
Guest session processes are not confined in 16.10 and newer releases
Bug Description
Processes launched under a lightdm guest session are not confined by the /usr/lib/
The simple test case is to log into a guest session, launch a terminal with ctrl-alt-t, and run the following command:
$ cat /proc/self/
Expected output, as seen in Ubuntu 16.04 LTS, is:
/usr/lib/
Running the command inside of an Ubuntu 16.10 and newer guest session results in:
unconfined
Related branches
The pstree output helps shed some light on this bug. It looks like the portions of the guest session that are spawned upstart are properly confined. The portions spawned by systemd are not confined. I'm attaching the `pstree -Z guest-XXXXXX` output from a running yakkety guest session.
Here's the pstree output for a zesty system. It looks like more things have been moved over to systemd's control and, therefore, more things are unconfined.
I should mention that the above pstree outputs require some changes to the pstree code to get the AppArmor label included in the output when the -Z option is specified. I've pushed a work-in-progress quality git branch of psmisc to https:/
Ow. Unfortunately I don't have any information on how to fix this since most of the work on guest sessions and systemd was done by Martin Pitt.
@pitti - I know you don't have any responsibility here but wondering if you have any advice on what direction to solve this?
The proper way to do this would be to use pam_apparmor, similar to how selinux is doing this, through systemd --user
/etc/
However currently this would require updating to a new version of pam_apparmor OR confining systemd to define hats and potentially other setup, which is probably something we are not ready to do just yet.
Note: ideally we want to set things up to use a policy namespace stack so that snaps can work from the guest session.
I spent some time looking into pam_apparmor and understanding how could be used. It seems like it would be extremely risky to introduce in a security update and I'm not sure if it even supports everything that would be needed. IIUC, it requires us to confine all login applications that use PAM and it isn't clear if we can selectively confine only the guest users and leave all other users unconfined. At this point, I'm not comfortable/
I also did a bit of experimenting with adding "AppArmorProfil
Modifying the user@.service file also isn't ideal because I don't see a way to only apply the AppArmor profile to guest user sessions while leaving regular user sessions unconfined.
I don't see a good solution to this problem.
@tyhicks you are correct that pam_apparmor is NOT a good solution currently. I will restate, it requires either
- a new version of pam_apparmor
or
- confining systemd and setting up hats for the guest session user (which currently means the user name can not have randomization).
pam_apparmor does NOT require we confine all pam applications, just those that are using it.
A minimal patch to better support guest sessions in pam_apparmor (using change_onexec instead of change_hat) could be done (again basically a new version of pam_apparmor), and might be the best solution. Or if you want I guess we could look at landing full support but that is larger and would involve the parser, etc.
We need to either find a solution to this issue, or push an update to disable guest sessions.
FWIW - Desktop are fine with disabling the guest session while we work out the systemd stuff.
Patch to Yakkety that disables guest support by default.
Patch to Zesty that disables guest support by default.
These patches ship LightDM with guest support disabled by default. You can re-enable it by putting a config file with higher priority containing:
[Seat:*]
allow-guest=true
e.g. put this in:
/etc/lightdm/
/etc/lightdm/
/usr/share/
@robert-ancell thanks for the debdiffs! Is the addition of debian/
> I also did a bit of experimenting with adding "AppArmorProfil
That was my first thought, too. This can be be done like this:
1) lightdm creates the temporary guest user/id.
2) lightdm then creates /run/systemd/
[Service]
AppArmorProfile
so that it applies *only* to the guest UID.
3) lightdm calls org.freedesktop
4) lightdm then goes on with starting the session
5) after the session finishes, clean up the above drop-in.
I tested that with a permanent user (as I don't want to change the lightdm code for experimentation), and I now get violations like
audit: type=1400 audit(149444866
audit: type=1400 audit(149444866
audit: type=1400 audit(149444866
As Tyler mentioned, the profile needs updating for a systemd user session, as that now confines each service into its own cgroup. Some stuff like reading cap_last_cap (the second violation) can probably just be quiesced, and maybe it can even disallow journal access, but access to the cgroup fs for and beneath user-$ID.slice is required if you want to use a systemd session for the guest session at all. The normal ACLs should already provide sufficient isolation there.
There is an alternative for SRUs: don't use the systemd user session but the upstart one. This requires adding an upstart dependency/
@Tyler - yes, the 99 should have been removed...
Fixed zesty debdiff
This is CVE-2017-8900.
This bug was fixed in the package lightdm - 1.19.5-0ubuntu1.2
---------------
lightdm (1.19.5-0ubuntu1.2) yakkety-security; urgency=medium
* SECURITY UPDATE: Guest session not confined (LP: #1663157)
- debian/
- debian/
- Disable guest sessions by default, this can be overridden by custom
- CVE-2017-8900
-- Robert Ancell <email address hidden> Tue, 09 May 2017 09:32:16 +1200
This bug was fixed in the package lightdm - 1.22.0-0ubuntu2.1
---------------
lightdm (1.22.0-0ubuntu2.1) zesty-security; urgency=medium
* SECURITY UPDATE: Guest session not confined (LP: #1663157)
- debian/
- debian/
- Disable guest sessions by default, this can be overridden by custom
- CVE-2017-8900
-- Robert Ancell <email address hidden> Tue, 09 May 2017 09:32:16 +1200
I'm making this bug public now that we have security updates published which disable the guest session. My hope is that we can re-enable it after the changes suggested by pitti can be investigated/
If you have a use case which requires the guest session, you can manually re-enable it by writing the following contents to /etc/lightdm/
# Manually enable guest sessions despite them not being confined
# IMPORTANT: Makes the system vulnerable to CVE-2017-8900
# https:/
[Seat:*]
allow-guest=true
Balint, could you follow through on this bug? Martin has provided some good general guidance already about what's required to re-enable secure guest sessions in artful.
After coming back to this bug, I noticed that Robert was not subscribed and couldn't see the bug. He's now subscribed. | https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1663157 | CC-MAIN-2017-22 | refinedweb | 1,170 | 59.64 |
Created attachment 285545 [details]
patch to ebuild
If Python3 is the default interpreter singularity-0.30c emerges OK but does not run:
$ singularity
Traceback (most recent call last):
File "singularity.py", line 1, in <module>
import code.singularity
File "/usr/share/games/singularity/code/singularity.py", line 54
except Exception, reason:
^
SyntaxError: invalid syntax
Attached patch to the ebuild fixes this for me.
(OK only the games_make_wrapper change is key to this, it also sets a Python dependency and byte-compiles the code).
Created attachment 285547 [details, diff]
patch to ebuild
Created attachment 285555 [details, diff]
patch to ebuild
grief 2 years ago this has been sitting
python team, please take it up and change to current python version
2 months, not years.
The patch looks mostly correct.
PYTHON_DEPEND="2" instead of PYTHON_DEPEND="2:2.4" is sufficient since 2.4 is the oldest version available in the tree.
python_pkg_setup() should be called after python_set_active_version().
also, games should remain the last item on the inherit line.
So, this bug can be closed then ? Singularity development seems to be inactive since 2010. With the attached patch, this issue is fixed, right ?
Pushed as:
> *singularity-0.30c-r1 (29 Jun 2013)
>
> 29 Jun 2013; Sergei Trofimovich <slyfox@gentoo.org>
> +singularity-0.30c-r1.ebuild, singularity-0.30c.ebuild:
> Ported to python-single-r1 (used funnyboat-1.5-r1 as an example). Added
> workaround to stable ebuild to run python2 (bug #381809 by Chris Mayo).
Hope I broke less, than fixed.
Thank guys! | https://bugs.gentoo.org/show_bug.cgi?id=381809 | CC-MAIN-2021-39 | refinedweb | 250 | 61.43 |
Using DirectDraw in Document/View Architecture
Preface
I know there are a bunch of classes and related articles written in this DirectX section of. So this article is a simple tutorial for people who just started programming in DirectDraw. I hope you guys will enjoy this tutorial.
I was working on some design for Galloping Ghost Productions when I discovered there is a way to integrate a DirectDraw interface into Document/View architecture. People might ask, why add an DirectDraw interface into the view object when a full-screen DirectDraw application can be done? Yes, doing a windowed DirectDraw application is not as cool as doing a full-screen hardcore DirectDraw application. However, you have some advantages when designing a windowed application:
- It is easier for debugging.
- It can create an interface for using DirectShow interfaces.
- It can be used to design windowed games using DirectDraw and Direct3D. KOEI's "Romance of Three Kingdoms" used windowed applications.
- The concept of this tutorial can integrate into other concepts in. For example, there is a small class that can be used for designing screen savers. You can integrate the stuff in this tutorial into that to create DirectDraw-based screen savers.
- ......
Anyway, let's start the tutorial.
Setup
We all know the CView class is a derivative of the CWnd class. Thus it has all the physical attributes of a window. So it is really not hard at all to add a DirectDraw interface into the view class derived from CView. I find it is much easier to program than doing full-screen DirectDraw. I used to spend hours correcting problems with SetDisplayMode() or SetCooperationLevel(), or problems with off-screen surfaces. All these problems are due to the problem of debugging. It is really hard to debug under full-screen mode. This problem can be eliminated by using windowed mode DirectDraw and with the debugging tools provided by MFC classes. I am wasting time here; sorry.
- First, you have to make some slight modifications to Visual C++ 6.0. I am certain that when you installed DirectX 7.0 SDK or above, the environmental variables in Visual C++ are automatically set up correctly. If, after you download my code and compile it and there are errors during linking or compiling, you need to do some tuning:
- Go to Tools, Options.
- Select the Directories tab.
- Select "Include files" in the "Show directories for" list box.
- Add the INCLUDE directory of DirectX on top of all directories. For example, if you installed DirectX 7.0 in C:\DXSDK, there is a directory inside called "include", which is C:\DXSDK\INCLUDE; put this directory on top of all other directories shown in the list box. See the following figure:
- Do the same thing with the LIB directory by selecting "Library files" in "Show directories for"; then find the LIB directory in your DirectX SDK directory, For example, if you installed DirectX 7.0 in C:\DXSDK, then there is a directory inside called "include", which is C:\DXSDK\LIB, put this directory on top of all other directories shown in the list box. See the following figure:
Then, click OK.
- If there still are problems in linking, add some more things to the project settings:
- Go to Project, Settings.
- Select the Link tab.
- In "Object/library modules", add "ddraw.lib" and "dxguid.lib". See the following figure:
Then click OK.
- Now you can download my code, compile it, and run.
My Code
I wrote this CDDrawSystem class (located in DDrawSystem.h and DDrawSystem.cpp files) to display graphics using DirectDraw. Basically it has:
- Constructor, Destructor.
- Init() function that creates the IDirectDraw7 interface, a primary surface buffer, a back surface buffer, and a clipper for the primary surface buffer. It is self explanatory. I grabbed the code from DDUTIL.H and DDUTIL.CPP, which are from Microsoft, along with the rest of the DirectX SDK.
- Terminate() function that terminates all objects using the COM method release().
- Clear() function that clears the primary surface and the back surface buffer by blit color 0 (black) to these two surfaces.
- Display() function will blit the back surface buffer to the primary surface and thus the graphic will be displayed. The problem with Display is that you cannot blit using absolute coordinates. You have to know exactly where your view (which is the client window) is, and blit the graphics to this view area. So this is how I designed the function:
- In addition to these functions, you can add new functions such as load bitmap files, bit blit bitmaps from off-screen surfaces, and draw text, primitive geometric shapes, and lock surface and do crazy stuff.
void CDDrawSystem::Display() { HRESULT hRet; RECT rt; POINT p = {0, 0}; ClientToScreen(hWnd, &p); // get the client area on // the desktop by using this line. rt.left = 0 + p.x; rt.top = 0 + p.y; rt.right = 800 + p.x; rt.bottom = 600 + p.y; // to ensure the drawing is complete, we use loops to continue // the drawing until the blitting returns DD_OK or it is not // DDERR_WASSTILLDRAW; } }
I wrote this TestDraw() function just to show you how to do primitive drawings on back screen surfaces and blit them to the primary surface. It will display "This is a stinky App" at coordinate (20, 20). Then, if you click anywhere in the client area, it will draw a big white circular dot (about 50 pixels in radius).
Now, the Complete Code
// DDrawSystem.h: interface for the CDDrawSystem class. // //////////////////////////////////////////////////////////////// #if !defined(AFX_DDRAWSYSTEM_H__1E152EB4_ED1D_4079_BDD4_773383DD98C8 __INCLUDED_) #define AFX_DDRAWSYSTEM_H__1E152EB4_ED1D_4079_BDD4_773383DD98C8 __INCLUDED_ #if _MSC_VER > 1000 #pragma once #endif // _MSC_VER > 1000 #include <ddraw.h> #define _CHARACTORBUILDER_ #include "../GameLib/Image.h" class CDDrawSystem { public: CDDrawSystem(); virtual ~CDDrawSystem(); BOOL Init(HWND hWnd); void Terminate(); void Clear(); void TestDraw(int x, int y); void Display(); protected: LPDIRECTDRAW7 m_pDD; LPDIRECTDRAWSURFACE7 m_pddsFrontBuffer; LPDIRECTDRAWSURFACE7 m_pddsStoreBuffer; LPDIRECTDRAWCLIPPER pcClipper; HWND hWnd; }; #endif //!defined(AFX_DDRAWSYSTEM_H__1E152EB4_ED1D_4079_BDD4_ 773383DD98C8__INCLUDED_) // DDrawSystem.cpp: implementation of the CDDrawSystem class. // //////////////////////////////////////////////////////////////// #include "stdafx.h" #include "DDrawSystem.h" #ifdef _DEBUG #undef THIS_FILE static char THIS_FILE[]=__FILE__; #define new DEBUG_NEW #endif //////////////////////////////////////////////////////////////// // Construction/Destruction //////////////////////////////////////////////////////////////// CDDrawSystem::CDDrawSystem() { m_pDD = NULL; m_pddsFrontBuffer = NULL; m_pddsStoreBuffer = NULL; pcClipper = NULL; } CDDrawSystem::~CDDrawSystem() { Terminate(); } // old DirectDraw Initialization stuff. // Set a window mode DirectDraw Display. BOOL CDDrawSystem::Init(HWND hWnd) { HRESULT hRet; this->hWnd = hWnd; hRet = DirectDrawCreateEx(NULL, (VOID**)&m_pDD, IID_IDirectDraw7, NULL); if(hRet != DD_OK) { AfxMessageBox("Failed to create directdraw object."); return FALSE; } hRet = m_pDD->SetCooperativeLevel(hWnd, DDSCL_NORMAL); if(hRet != DD_OK) { AfxMessageBox("Failed to set directdraw display behavior."); return FALSE; } HRESULT hr; DDSURFACEDESC2 ddsd; ZeroMemory( &ddsd, sizeof( ddsd ) ); ddsd.dwSize = sizeof( ddsd ); ddsd.dwFlags = DDSD_CAPS; ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE; if(FAILED(hr = m_pDD->CreateSurface(&ddsd, &m_pddsFrontBuffer, NULL))) { AfxMessageBox("Failed to create primary surface."); return FALSE; } // Create the backbuffer surface ddsd.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT; ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_3DDEVICE; ddsd.dwWidth = 800; ddsd.dwHeight = 600; if(FAILED(hr = m_pDD->CreateSurface(&ddsd, &m_pddsStoreBuffer, NULL))) { AfxMessageBox("Failed to create back buffer surface."); return FALSE; } if(FAILED(hr = m_pDD->CreateClipper(0, &pcClipper, NULL))) { AfxMessageBox("Failed to create clipper."); return FALSE; } if(FAILED(hr = pcClipper->SetHWnd(0, hWnd))) { pcClipper->Release(); AfxMessageBox("Failed to create primary surface."); return FALSE; }
if(FAILED(hr = m_pddsFrontBuffer->SetClipper(pcClipper))) { pcClipper->Release(); AfxMessageBox("Failed to create primary surface."); return FALSE; } return TRUE; } // make sure all things are terminated and set to NULL // when application ends. void CDDrawSystem::Terminate() { if (m_pDD != NULL) { if (m_pddsFrontBuffer != NULL) { if (m_pddsStoreBuffer != NULL) { m_pddsStoreBuffer->Release(); m_pddsStoreBuffer = NULL; } if (pcClipper != NULL) { pcClipper->Release(); pcClipper = NULL; } m_pddsFrontBuffer->Release(); m_pddsFrontBuffer = NULL; } m_pDD->Release(); m_pDD = NULL; } } // clear both off csreen buffer and primary buffer. void CDDrawSystem::Clear() { HRESULT hRet; DDBLTFX fx; fx.dwSize = sizeof(fx); fx.dwFillColor = 0x000000; while (1) { hRet = m_pddsFrontBuffer->Blt(NULL, NULL, NULL, DDBLT_COLORFILL, &fx); if (hRet == DD_OK) break; else if (hRet == DDERR_SURFACELOST) { m_pddsFrontBuffer->Restore(); } else if (hRet != DDERR_WASSTILLDRAWING) break; } while (1) { hRet = m_pddsStoreBuffer->Blt(NULL, NULL, NULL, DDBLT_COLORFILL, &fx); if (hRet == DD_OK) break; else if (hRet == DDERR_SURFACELOST) { m_pddsStoreBuffer->Restore(); } else if (hRet != DDERR_WASSTILLDRAWING) break; } } // a test: // The conclusion is: Under no circumstance, draw directly to // primary Surface! // It doesn't work that way. // ... // ... // This is just a simple test function. It has shit use in this // project. void CDDrawSystem::TestDraw(int x, int y) { HRESULT hRet; HDC dc; hRet = m_pddsStoreBuffer->GetDC(&dc); if (hRet != DD_OK) return; POINT p = {0 + x, 0 + y}; ClientToScreen(hWnd, &p); SetTextColor(dc, RGB(255, 0, 0)); TextOut(dc, 20, 20, "This is a stinky App", lstrlen("This is a stinky App")); Ellipse(dc, x-50, y-50, x+50,y+50); m_pddsStoreBuffer->ReleaseDC(dc); } // Load images from offscteen buffer to primary buffer // and for display. void CDDrawSystem::Display() { HRESULT hRet; RECT rt; POINT p = {0, 0}; ClientToScreen(hWnd, &p); rt.left = 0 + p.x; rt.top = 0 + p.y; rt.right = 800 + p.x; rt.bottom = 600 + p; } }
Learn Game Programming using DirectX9Posted by lp_ on 02/01/2007 03:02pm
Learn Game Programming using DirectX9 visit:
stuck teenPosted by rapheal on 05/06/2005 02:32pm
how can i make a game wit direct x sdk, and win 2005 visual basic c/c++Reply
Please help me on Directshow and MFCPosted by nvnoi76 on 12/01/2004 09:50
Where can I find "/GameLib/Image.h"Posted by svahora on 10/18/2004 08:13am
I am getting compilation error because of above files. fatal error C1083: Cannot open include file: '../GameLib/Image.h': No such file or directory
Don't botherPosted by petruza on 01/31/2006 01:54pm
The file isn't needed at all, just delete the line: #include "../GameLib/Image.h"Reply
Thanks to JRSPosted by suyu on 01/12/2006 10:58pm
I encountered the same problem,but when I followed your suggestion,my computer compiled succeddfully.Thanks a lot!Reply
Where can I find "/GameLib/Image.h"Posted by JRS on 11/09/2004 02:50pm
The program does not work if the user changes the resolutionPosted by chris109 on 04/02/2004 08:29am
How can I avoid the call to OnDraw procedure of the view class?Posted by Legacy on 12/01/2003 12:00am
Originally posted by: Deutsch
I've tried to create a simple animation on the basis of this
tutorial. The problem is, that the function OnDraw in the view class must be called to renew the screen, and because of this the screen is blinking.
How can I avoid the call to OnDraw procedure of the view class?Posted by JRS on 11/09/2004 03:11pm
Code will not work in some casesPosted by Legacy on 09/02/2003 12:00am
Originally posted by: Ren�
Nice code, however I can assure you it will not work in anReply.
where can I find ddutil.h and resource.hPosted by Legacy on 05/31/2003 12:00am
Originally posted by: anita
haiReply
can you tell me where can I foind ddutil.h and resource.h
thank you
Works also for dialog-based appsPosted by Legacy on 05/11/2003 12:00am
Originally posted by: Zygoman
Very good stuff !!!! I used it for a dialog-based application, and there was no problem at all !!!!! Thanks a lot, save a lot amount of time :-)Reply
Success!!Posted by Legacy on 10/19/2002 12:00am
Originally posted by: Tyrone Deane
Thanks for the tip about how to include the DirectX lib and include directories. My code now complies. Yippee!!!!Reply | https://www.codeguru.com/cpp/g-m/directx/directdraw/article.php/c4361/Using-DirectDraw-in-DocumentView-Architecture.htm | CC-MAIN-2018-26 | refinedweb | 1,880 | 50.12 |
How to use custom functions with
:cdo
Vim’s
:cdo command lets you run an Ex command in each entry in the quickfix list.
This is useful for large-scale refactoring work.
One way to leverage this is to write a macro to perform the required update then
run it with
:cdo:
:cdo normal @q
Macros are powerful and many update operations can be done this way. However, sometimes the necessary update is too complex for a macro. This happens when there are several “categories” of update required and conditional logic is required to determine the appropriate operation.
For such circumstances you can write a custom Vim function and call that for each quickfix entry.
For example, I used this technique to factor out around 1,400
F841
flake8 violations
from a project today.
For
flake8 work, the quickfix list can be populated by
setting
makeprg=flake8 and running
:make.
As there were several distinct categories of violation that required a separate
update operation to resolve, I created a
FixF841Error function that
inspected the line in question to determine the appropriate remedy. Something
like this:
function! FixF841Error() " Example the line of the error to determine what fix is required. let line = getline('.') if line =~ 'as e:' " Handle scenario of unused exception variable ... elseif line =~ '^\s\+\w\+ = factory' " Handle scenario of unused test factory variable ... else echom "Unable to fix" endif endfunction
After
source-ing the function I ran:
:cdo call FixF841Error()
to resolve the majority of the errors.
Would have taken all day without this. | https://til.codeinthehole.com/posts/how-to-use-custom-functions-with-cdo/ | CC-MAIN-2021-10 | refinedweb | 255 | 55.64 |
State management in Flutter apps using Provider
Hello everyone, I hope you all are doing great. Today I am going to talk about State Management in Flutter apps. So first at all let’s find out what exactly state means…
What is State in Flutter?
“State is information that can be read synchronously when the widget is built and might change during the lifetime of the widget. It is the responsibility of the widget implementer to ensure that the State is promptly notified when such state changes, using State.setState.” -Flutter official documentation
Okay let’s figure it by our self. If you are a react or react native developer you have probably heard the word about state. Simply state is a variable or a value that can be dynamically change in run time. For an example let’s assume that you have created a flutter app to buy shopping items. So you probably have a cart in your system. For simplicity let’s assume that cart is list of Strings.
So where do you gonna put this state? Probably inside a screen call cart that has a state-full widget right?
This is the normal way that you going to do. It’s okay to do that way. But let’s figure out the problems that you are going to face in this process. First take a look at the following picture.
As you can see you have your screen or the widget as MyCart and inside that you have it’s state. You can change the state inside that widget. Or if you have any other child widgets for MyCart, in that case also you can access/change the state inside that child components as well. But what about the MyAppBar widget or MyLoginScreen????
There is no way you can access/change your cart state inside in those widgets. So how are we going to solve this problem? The answer is pretty simple, using a state management.
How state management system/tool is going to help with this?
A state management tool will provide a way to provide your state into several widgets by lifting up into the top level in your widget tree.
As you can see after using state management tool, your state is available to all of your widgets/screens.
What are the famous state management tool available for Flutter?
- Provider
- InheritedWidget & InheritedModel
- SetState
- Redux
- Fish-Redux
- BLoC/Rx
- Flutter Commands
- GetIt
- MobX
- Riverpod
- GetX
That’s a lot, So what is the recommended way to handle state in flutter?
- Provider is the recommended state management tool that is recommended by the flutter team.
So let’s create a simple application that can increase a value and manage it’s state using Provider.
- Create a new flutter application
flutter create statedemo
2. Navigate into project folder
cd statedemo
3. Add the Provider dependency to the project
flutter pub add provider
4. Create a new folder name “Providers” inside lib folder
mkdir Providers
5. Inside that Provider folder create dart file name “counter_provider.dart”
touch counter_provider.dart #linux
echo -> counter_provider.cart #windows
6. In “counter_provider.dart” create a class name Counter with ChangeNotifier
//The with keyword indicates the use of a "mixin"//A mixin refers to the ability to add the capabilities of another //class or classes to your own class, without inheriting from those //classes.import 'package:flutter/material.dart';class Counter with ChangeNotifier { int _count = 0; //to access to the state
int get count => _count; //to increment the state
void increment() {
_count++;
notifyListeners();
}}
Now we successfully separate our state from the widget. Now let’s use this in our application.
7. Let’s provide our state to root app widget so any widget can use our state.
7.a). import required files
import 'package:flutter/material.dart';
import 'package:{PROJECT_NAME}/providers/counter_provider.dart';
import 'package:provider/provider.dart';
7.b).
void main() {
runApp(MultiProvider(
providers: [ChangeNotifierProvider(create: (_) => Counter())],
child: MyApp(),
));}
If you have more than 1 state, you can put those into the providers list
Now you can access/change the state using the respective methods that we created on the Counter class
For an example if you want to access the counter state inside your widget
context.watch<Counter>().count
To increment the state
onPressed: () => context.read<Counter>().increment() | https://shiharadilshan.medium.com/state-management-in-flutter-apps-610e44f9d62a?source=read_next_recirc---------3---------------------ea09ea59_b984_4c6f_a09f_5650903d6dc9---------- | CC-MAIN-2022-21 | refinedweb | 717 | 65.83 |
This is a general buffer. More...
#include <ivideo/rndbuf.h>
Detailed Description
This is a general buffer.
Main creators of instances implementing this interface:
- csRenderBuffer::CreateRenderBuffer()
- csRenderBuffer::CreateIndexRenderBuffer()
- csRenderBuffer::CreateInterleavedRenderBuffers()
- See also:
- csRenderBuffer
Definition at line 173 of file rndbuf.h.
Member Function Documentation
Copy data to the render buffer.
- Remarks:
- Do not call on a locked buffer. Does not work with interleaved buffer, copy to master buffer instead.
Get type of buffer (static/dynamic).
Gets the number of components per element.
Gets the component type (float, int, etc).
Number of elements in a buffer.
Get the distance between two elements (in bytes, includes stride).
Get the master buffer in case this is an interleaved buffer.
The master buffer is the buffer that actually holds the data; while it can be used to retrieve or set data, it must not be used for actual rendering. Use the interleaved buffers instead.
Get the offset of the buffer (in bytes).
The highest index contained in this buffer, only valid for index buffers.
The lowest index contained in this buffer, only valid for index buffers.
Get the size of the buffer (in bytes).
Get the stride of the buffer (in bytes).
Get version.
Whether the buffer is an index buffer.
Releases the buffer.
After this all access to the buffer pointer is illegal.
Set callback object to use.
Set the buffer data.
This changes the internal pointer to the buffer data to buffer instead. It will also be returned by Lock(). It is the responsibility of the caller to ensure that the memory pointed to by data is valid for as long as the render buffer is used.
- Remarks:
- Do not call on a locked buffer. Does not work with interleaved buffer, set data on master buffer instead.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4/structiRenderBuffer.html | CC-MAIN-2014-10 | refinedweb | 318 | 61.43 |
Hi all. I'm running the following from the Radiant installation: % rake production db:bootstrap And get this error: rake aborted! undefined method `namespace' for main:Object /var/lib/gems/1.8/gems/radiant-0.6.1/Rakefile:7 Now, this line says: require 'tasks/rails' Any help or suggestions would be very appreciated,
on 2007-05-22 13:23
on 2007-05-22 17:06
On 5/22/07, Johan Rönnblom <johan.ronnblom@picsearch.com> wrote: > /var/lib/gems/1.8/gems/radiant-0.6.1/Rakefile:7 > > Now, this line says: > require 'tasks/rails' > > > Any help or suggestions would be very appreciated, Not a direct answer to your question, but I'd recommend that anyone doing ruby in earnest on a debian system install ruby from source rather than (or in addition to) the debian packages. The debian ruby package maintainers partition ruby in their own way, AND since they see some kind of conflict between the way they want to do package management, and the way rubygems does it, they don't support gems well if at all. What I've done on my ubuntu system is to install ruby from source in /usr/local and leave the debian packages installed for other packages which depend on them. I just use svn/make and gem to keep things up to date. -- Rick DeNatale My blog on Ruby
on 2007-05-22 19:32
Johan Rönnblom wrote: > /var/lib/gems/1.8/gems/radiant-0.6.1/Rakefile:7 Rake 0.7 added the concept of namespaces, so you probably don't have the latest version of rake. $ rake --version I have 0.7.3 on my box and it runs fine. You might try upgrading if you have something less than 0.7. | https://www.ruby-forum.com/topic/108914 | CC-MAIN-2018-09 | refinedweb | 296 | 62.98 |
THE SQL Server Blog Spot on the Web
I've mentioned before my company manages trade shows, and we've got a series of web sites managed by an application which uses a different SQL Server database for each site, with a master database (I'll call it Global, to differentiate it from SQL Server's master database). Well, we have a number of shows which have multiple show locations, and these use a parent-child set of databases, where information pertaining to all the shows is in the parent database, and then information to each specific location is in the child database.
There is information that is important to both parent and child, and that's kept in the parent database, and we have views in the child which return the data from the parent database that's specific to the child. The problem is when the application is changed, and the views need to be modified. Each child site has a technically different view based on it's show ID and the name of the parent database.
Yesterday I had to make such a change, and I decided a PowerShell script was the best way to approach the problem. Now, there's a table defining all the parent/child relationships in the Global database, so the first step is to query that table to get the necessary information:
#build_exibitors_view.ps1#This script will recreate the 'exhibitors' view in all child databases$cn = new-object system.data.SqlClient.SqlConnection("Data Source=MyServer\MyInstance;Integrated Security=TRUE;Initial Catalog=Global");$ds = new-object "System.Data.DataSet" "dsChildSites"$q = "SELECT [childShowID]"$q = $q + " ,[parentDBName]"$q = $q + " ,[childDBName]"$q = $q + " FROM [Global].[dbo].[ParentChild]"$da = new-object "System.Data.SqlClient.SqlDataAdapter" ($q, $cn)$da.Fill($ds)
Now, once the DataSet is populated it's time to loop through the results building the new view for each child database. I create a DataTable from the DataSet, then use the FOREACH-OBJECT cmdlet to step through the results, and then create variables for the values returned for each iteration.
$dtChild = new-object "System.Data.DataTable" "dsChildSites"$dtChild = $ds.Tables[0]$dtChild | FOREACH-OBJECT { $pDB = $_.parentDBName $cDB = $_.childDBName $cshowID = $_.childShowID
Inside the loop I need to connect to each child database and first drop the existing view. I concatenate the name of the child database to the connection string for the SqlCommand object.
$cn = new-object System.Data.SqlClient.SqlConnection("Data Source=MyServer\MyInstance;Integrated Security=TRUE;Initial Catalog=" + $cDB) $cn.Open() $sql = "IF EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[exhibitors]')) DROP VIEW [dbo].[exhibitors]" $cmd = new-object "System.Data.SqlClient.SqlCommand" ($sql, $cn) $cmd.ExecuteNonQuery() | out-null
Once the existing view is deleted I can create the new one. Here I concatenate the name of the parent database in the FROM clause, and filter the resultset based on the showID of the child show.
$sql = "CREATE VIEW [dbo].[exhibitors]" $sql = $sql + " AS" $sql = $sql + " SELECT exhibID, name, description" $sql = $sql + " FROM " + $pDB + ".dbo.exhibitors" $sql = $sql + " WHERE (showID = " + [string]$cshowID + ")"
Now that the view has been built for the child database I can execute the query to create it.
$cmd2 = new-object "System.Data.SqlClient.SqlCommand" ($sql, $cn) $cmd2.ExecuteNonQuery() | out-null $cn.Close() }
(Note that the actual view used is much more complicated than this, but I wanted to share the technique.) Rather than use an editor to make changes to a couple of dozen views across as many databases I used PowerShell to automate the process and made the changes in a few seconds.
Allen
If you would like to receive an email when updates are made to this post, please register here
RSS
Why not just use T-SQL? Would that be simpler?
I consider myself a scripting guy, but I must admit that I have not seen any compelling reason for using PowerShell in general. Most PS examples from MS seem to be rather contrived, and have better alternative solutions.
I could have used T-SQL. We're incorporating a variety of technology tools together here to automate processes, though, and by doing this in PowerShell the network admins who don't know T-SQL can run scripts like this when setting up a new trade show web site without running SSMS. They just have to run a set of scripts and the site is done.
The best way to accomplish any given task? It depends.
@Linchi - "I have not seen any compelling reason for using PowerShell in general"
With respect, I don't think you can have been looking very hard then. How about scripting out some objects in a database?
[reflection.assembly]::LoadwithPartialName("Microsoft.SQLServer.SMO") | out-Null
$server = New-Object 'Microsoft.sqlserver.management.smo.server' '<servername>'
$server.JobServer.jobs| foreach-Object {$_.script()}
This is the kind of thing that PS with SMO is superb at.
Moff;
How's that so different from the following:
using System;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Smo.Agent;
using Microsoft.SqlServer.Management.Common;
using System.Collections.Specialized;
class Junk {
static void Main() {
Server svr = new Server(new ServerConnection("server", "user", "password"));
foreach (Job job in svr.JobServer.Jobs) {
StringCollection sc = job.Script();
foreach (string s in sc) { Console.WriteLine(s); }
}
}
}
I mean, the key to 'scripting' script is to understand SMO. Once you are comfortable with SMO, you can use any .NET language. PS doesn't really provide much added value in this particular example. To me and for this particular example, all that PS provides is syntactic sugar--that is of very limited usage scope--one can do without.
I'd love to see a more compelling example. (Of course, there is a vendor-related compelling reason that I fully understand. That is, if MS pushes PS to all its platforms an apps, and make it ubiquitous. But evne then not all apps are written by MS. Personally I think MS carves the problem space wrong with PS.)
@ Linchi, it differs by about a fifth :-)
YMMV, I just prefer the simplicity and neatness of PS.
I was being a bit facetious. I don't mind PS being so dependent on .NET and on these specialized .NET classes. It's fine to be able to pipe specialized .NET objects, but I wish PS were better in dealing with the traditional byte streams. More specifically, I'd like it to much better blend regular expressions into byte stream processing.
PS 2.0 does look a lot better. At least, it appears to be supporting background jobs and Script cmdlets.
Linchi, in the presentations I've done both in local user group sessions and at the last two PASS conferences where I've presented SMO sessions I've gotten feedback that many DBA's aren't allowed to have Visual Studio on their desktops. They're not developers, they're administrators. For them, PowerShell is not just syntactic sugar, it's a way for pure administrators to automate processes.
Allen;
But you don't need Visual Studio to write admin utilties in VB.NET or C#. I've written many console utilities in C# and never used VS.
I'm not here to try to convince anybody to agree that PS is bad. It's not that bad. But I personally don't think there is much gain for the kind of effort MS put in to develop PS. For the same amouont of investment, they could have just built on top of what has proven in many existing environments (e.g. a Korn shell, bash, or Perl), adding whatever they wanted to add (e.g. piping .NET objects).
We'll see how it plays out. But I doubt these pure administrators will really embrace PS if they are not real scripting guys. And if they are real scripting guys on the Windows platform, they would have to be versed in .NET. To be really versed in .NET, they need to know VB.NET or C#, and if they become comfortable in C# or VB.NET, they may go, "what the heck with this PS?" when they can go all way with VB.NET or C# without being shackled by the PS paradigm. Now, it may appear that I'm treating C# as a scripting language, in a way I am because of the .NET classes. Of course, C# is not a scripting language, and neither is PS. C# is too strongly typed and static for scripting, whereas I feel much handicapped or being put into a straightjack with PS. I'm baiscally a guy who likes to write quick dirty throw-away scripts to get a job done, and for that I want a scripting language that is powerful enough that I know I can always do it no matter what scripting job I may run into, and yet for many jobs often quicker to write a new dirty script than to find an old script that's tucked somewhere in an older folder.
I think this may be very cool becuase I am a .NET developer and I want scripts to run on any win machine to upsert records from access databases to SQL 2008. I've been looking for a script like this. As mentioned, there are few interesting PS scripts out there. Anyway, the reader the author wrote for is probably me and I'm looking forward to trying these ideas out! I already do a bunch of VBS scripts and not having the ease and power of .NET is a pain to anyone who scripts with wscript. Thanks! Great work.
I also never used PowerShell instead I wrote a set of functions in using VB script and kept reusing them whenever there was need for automation.
I have been impressed with PS. Although I may be easily impressed. for us younger admins who have not got into VB that much and need a versitle scripting tool PS works great. If you compare many VB scripts to PS scripts PS scripts are shorter, easier to write and do the job really well. yes you have to know some .net to get the real work done, so what. PS is flexible and quickly becoming a great tool. Especially now that MS has included SQL support more directly, Data Protection Management support and even more. Anyone who has not looked at it should do so. Anyone who passed it up should give it a serious second look.
Allen,
Is there a way to script Windows Cluster Installation with PowerShell (or any other way)?
It would be really nice Disaster Recovery Script that builds out the cluster & configures it too.
It is a fantasy. Don't forget any code needs debug feature (VS), reuse (oob) and easy to write correct syntax (C#). Powershell has none.
Mark V
I'm sure there's a way to do it, but I'm a SQL Server guy, not a Cluster guy. Check the blogosphere, and I'm sure you'll find someone who's done it.
Jason, you're thinking like a developer. PowerShell is an administrator's scripting tool. Admins have different requirements, and while there are ways of debugging PowerShell (), the points you make are much more appropriate to a development environment than an administrative one. I personally would never write an application in PowerShell, but I use it a lot to automate the administrative tasks I do.
I'm with Linchi Shea on this one. Doesn't seem to offer much that can't be done alreday quite simply.
Oh, I had both Administrator and Developer positions. I have been programming over 20 years, from oldest language to latest language. I have presented to SQLPASS several times. For things I cannot accomplish in T-SQL, say WMI to get Perfmon data, I will say you can use PS. However the debug feature is lagging behind for an experienced programmer. Non-programmer can spend tons of time to try to get their codes to work. Be honest, how many non-DBA admin can code? or even use a copied script correctly against databases. Ask what is a connect-string, hum? Do you really want to encourage them to run things against database? Even an inappropriate select can bring you a unwelcomed shared lock.
I don't know what you mean a development environment of PS. You are it. The user (admin) is the developer. Do you expect a company to hire a PS developer? That is not going to happen.
()
Currently, to debug PS, it is the older-timer technique I used often 20 years ago. That is what a lot of experienced programmers look down. It is a waste of time, but I agree you can eventually get it to work.
I have done high level languages like C#, Java, I had done low level programming like Assembly. The query syntax is like .Net. There is no debugger, watch, pause, intellisense. Anyway, if you have not seen it, you would not understand.
For example, I don't see any equivalent error-handling like in higher level language try-catch. In case, $cn.Open() failed, the nightly scheduled admin job might end up no where, nobody knows.
This is something people need to know, how it compares to T-SQL, C#, use it only at advantages with its associated cost.
Jason, this blog post is two years old, and identifies a method of gathering data from SQL Server using PowerShell. PowerShell 2.0 has Try-Catch-Finally in the language. It's evident that you don't care to use or learn PowerShell, and that's fine. Others will benefit from learning it.
Allen, No. It has its use. The discussion is for "powershell in general" only post here. For example, if you try to access OS data, which you can use WMI in either C# or powershell. It has its short-comings that people need to be aware. Like I said again, "Use it only at its advantages with the associated cost." The debugger is also invented now such as Idera PowershellPlus. Hopefully, everything else will catch on so we can do real programming in this environment.
For people who hasn't been programming long enough, for example, I have not seen Powershell solve the code-reuse issue. It is still old-time copy-paste or include. C# went over the evolution phase. The inherited problem with scripting is best illustrated in ASP. If PS does not solve these problems, it will have a fate like ASP (active server page .asp is gone now forever).
This, for example, two years after powershell's invention, that operator does not seem to work correctly. (I spent a day try to get correct answer) which is a very simple concept in other languages.
# PowerShell cmdlet to list the files of C:\
$i=0
$GciFiles = Get-ChildItem "c:\" -force |where {(($_.attributes -band 0x20) -eq 0x20)}
foreach ($file in $GciFiles) {$i++}
$GciFiles |sort |ft name, attributes -auto
# PowerShell cmdlet to list the System files in the root of C:\
$GciFiles = Get-ChildItem "c:\" -force |where {(($_.attributes -band 0x20) -eq 32)} | http://sqlblog.com/blogs/allen_white/archive/2008/01/25/using-powershell-and-sql-server-together.aspx | CC-MAIN-2014-35 | refinedweb | 2,526 | 66.94 |
SoapDocumentMethodAttribute.OneWay Property
Assembly: System.Web.Services (in system.web.services.dll)
Property Valuetrue if the XML Web service client does not wait for the Web server to completely process an XML Web service method. The default value is false.
When an XML Web service method has the OneWay property set to true, the XML Web service client does not have to wait for the Web server to finish processing the XML Web service method. As soon as the Web server has deserialized the SoapServerMessage, but before invoking the XML Web service method, the server returns an HTTP 202 status code. A HTTP 202 status code indicates to the client that the Web server has started processing the message. Therefore, an XML Web service client receives no acknowledgment that the Web server successfully processed the message.
One-way methods cannot have a return value or any out parameters..
The following code example is an XML Web service method that does not require the client to wait for the XML Web service method to complete. Therefore, the sample sets the OneWay property to true.
<%@ WebService Language="C#" Class="Stats" %> using System.Web.Services; using System.Web.Services.Protocols; public class Stats: WebService { [ SoapDocumentMethod(OneWay=true) ] [ WebMethod(Description="Starts nightly statistics batch process.") ] public void StartStatsCrunch() { // Begin nightly statistics crunching process. // A one-way method cannot have return. | https://msdn.microsoft.com/en-us/library/system.web.services.protocols.soapdocumentmethodattribute.oneway(VS.80).aspx | CC-MAIN-2016-07 | refinedweb | 226 | 58.38 |
This is a C Program to implement XOR list. An XOR linked list is a data structure used in computer programming. It takes advantage of the bitwise XOR operation to decrease storage requirements for doubly linked lists.
Here is source code of the C Program to Implement Xor Linked List. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
#include <stdio.h>
#include <stdlib.h>
// Node structure of a memory efficient doubly linked list
struct node {
int data;
struct node* npx; /* XOR of next and previous node */
};
/* returns XORed value of the node addresses */
struct node* XOR(struct node *a, struct node *b) {
return (struct node*) ((unsigned int) (a) ^ (unsigned int) (b));
}
/* Insert a node at the begining of the XORed linked list and makes the
newly inserted node as head */
void insert(struct node **head_ref, int data) {
// Allocate memory for new node
struct node *new_node = (struct node *) malloc(sizeof(struct node));
new_node->data = data;
/* Since new node is being inserted at the begining, npx of new node
will always be XOR of current head and NULL */
new_node->npx = XOR(*head_ref, NULL);
/* If linked list is not empty, then npx of current head node will be XOR
of new node and node next to current head */
if (*head_ref != NULL) {
// *(head_ref)->npx is XOR of NULL and next. So if we do XOR of
// it with NULL, we get next
struct node* next = XOR((*head_ref)->npx, NULL);
(*head_ref)->npx = XOR(new_node, next);
}
// Change head
*head_ref = new_node;
}
// prints contents of doubly linked list in forward direction
void printList(struct node *head) {
struct node *curr = head;
struct node *prev = NULL;
struct node *next;
printf("Following are the nodes of Linked List: \n");
while (curr != NULL) {
// print current node
printf("%d ", curr->data);
// get address of next node: curr->npx is next^prev, so curr->npx^prev
// will be next^prev^prev which is next
next = XOR(prev, curr->npx);
// update prev and curr for next iteration
prev = curr;
curr = next;
}
}
// Driver program to test above functions
int main() {
/* Create following Doubly Linked List
head-->40<-->30<-->20<-->10 */
struct node *head = NULL;
insert(&head, 10);
insert(&head, 20);
insert(&head, 30);
insert(&head, 40);
// print the created list
printList(head);
return (0);
}
Output:
$ gcc XORList.c $ ./a.out Following are the nodes of Linked List: 40 30 20 10
Sanfoundry Global Education & Learning Series – 1000 C Programs.
Here’s the list of Best Reference Books in C Programming, Data Structures and Algorithms. | https://www.sanfoundry.com/c-program-implement-xor-linked-list/ | CC-MAIN-2018-13 | refinedweb | 419 | 58.66 |
The best answers to the question “How do I include a JavaScript file in another JavaScript file?” in the category Dev.
QUESTION:
Is there something in JavaScript similar to
@import in CSS that allows you to include a JavaScript file inside another JavaScript file?
ANSWER::.
…
ANSWER:
The old versions of JavaScript had no import, include, or require, so many different approaches to this problem have been developed.
But since 2015 (ES6), JavaScript has had the ES6 modules standard to import modules in Node.js, which is also supported by most modern browsers.
For compatibility with older browsers, build tools like Webpack and Rollup and/or transpilation tools like Babel can be used.
ES6 Modules
ECMAScript (ES6) modules have been supported in Node.js since v8.5, with the
--experimental-modules flag, and since at least Node.js v13.8.0 without the flag. To enable “ESM” (vs. Node.js’s previous CommonJS-style module system [“CJS”]) you either use
"type": "module" in
package.json or give the files the extension
.mjs. (Similarly, modules written with Node.js’s previous CJS module can be named
.cjs if your default is ESM.)
Using
package.json:
{ "type": "module" }
Then
module.js:
export function hello() { return "Hello"; }
Then
main.js:
import { hello } from './module.js'; let val = hello(); // val is "Hello";
Using
.mjs, you’d have
module.mjs:
export function hello() { return "Hello"; }
Then
main.mjs:
import { hello } from './module.mjs'; let val = hello(); // val is "Hello";
ECMAScript modules in browsers
Browsers have had support for loading ECMAScript modules directly (no tools like Webpack required) since Safari 10.1, Chrome 61, Firefox 60, and Edge 16. Check the current support at caniuse. There is no need to use Node.js’
.mjs extension; browsers completely ignore file extensions on modules/scripts.
<script type="module"> import { hello } from './hello.mjs'; // Or it could be simply `hello.js` hello('world'); </script>
// hello.mjs -- or it could be simply `hello.js` export function hello(text) { const div = document.createElement('div'); div.textContent = `Hello ${text}`; document.body.appendChild(div); }
Read more at
Dynamic imports in browsers
Dynamic imports let the script load other scripts as needed:
<script type="module"> import('hello.mjs').then(module => { module.hello('world'); }); </script>
Read more at
Node.js require
The older CJS module style, still widely used in Node.js, is the
module.exports/
require system.
// mymodule.js module.exports = { hello: function() { return "Hello"; } }
// server.js const myModule = require('./mymodule'); let val = myModule.hello(); // val is "Hello"
There are other ways for JavaScript to include external JavaScript contents in browsers that do not require preprocessing.
AJAX Loading
You could load an additional script with an AJAX call and then use
eval to run it. This is the most straightforward way, but it is limited to your domain because of the JavaScript sandbox security model. Using
eval also opens the door to bugs, hacks and security issues.
Fetch Loading
Like Dynamic Imports you can load one or many scripts with a
fetch call using promises to control order of execution for script dependencies using the Fetch Inject library:
fetchInject([ '' ]).then(() => { console.log(`Finish in less than ${moment().endOf('year').fromNow(true)}`) })
jQuery Loading
The jQuery library provides loading functionality in one line:
$.getScript("my_lovely_script.js", function() { alert("Script loaded but not necessarily executed."); });
Dynamic Script Loading
You could add a script tag with the script URL into the HTML. To avoid the overhead of jQuery, this is an ideal solution.
The script can even reside on a different server. Furthermore, the browser evaluates the code. The
<script> tag can be injected into either the web page
<head>, or inserted just before the closing
</body> tag.
Here is an example of how this could work:
function dynamicallyLoadScript(url) { var script = document.createElement("script"); // create a script DOM node script.src = url; // set its src to the provided URL document.head.appendChild(script); // add it to the end of the head section of the page (could change 'head' to 'body' to add it to the end of the body section instead) }
This function will add a new
<script> tag to the end of the head section of the page, where the
src attribute is set to the URL which is given to the function as the first parameter.
Both of these solutions are discussed and illustrated in JavaScript Madness: Dynamic Script Loading.
Detecting when the script has been executed
Now, there is a big issue you must know about. Doing that implies that you remotely load the code. Modern web browsers will load the file and keep executing your current script because they load everything asynchronously to improve performance. (This applies to both the jQuery method and the manual dynamic script loading method.) an event to run a callback function when the script is loaded. So you can put all the code using the remote library in the callback function. For example:
function loadScript(url, callback) { // Adding the script tag to the head as suggested before var head = document.head; var script = document.createElement('script'); script.type="text/javascript"; script.src = url; // Then bind the event to the callback function. // There are several events for cross browser compatibility. script.onreadystatechange = callback; script.onload = callback; // Fire the loading head.appendChild(script); }
Then you write the code you want to use AFTER the script is loaded in a lambda function:
var myPrettyCode = function() { // Here, do whatever you want };
Then you run all that:
loadScript("my_lovely_script.js", myPrettyCode);
Note that the script may execute after the DOM has loaded, or before, depending on the browser and whether you included the line
script.async = false;. There’s a great article on Javascript loading in general which discusses this.
Source Code Merge/Preprocessing
As mentioned at the top of this answer, many developers use build/transpilation tool(s) like Parcel, Webpack, or Babel in their projects, allowing them to use upcoming JavaScript syntax, provide backward compatibility for older browsers, combine files, minify, perform code splitting etc.
ANSWER:();
ANSWER:></script>").attr("src", url)); /* Note that following line of code is incorrect because it doesn't escape the * HTML attribute src correctly and will fail if `url` contains special characters: * $("head").append('<script src="' + url + '"><></script').attr('src', script)); import_js_imported.push(script); } } }); })(jQuery);
So all you would need to do to import JavaScript is:
$.import_js('/path_to_project/scripts/somefunctions.js');
I also made a simple test for this at Example.). | https://rotadev.com/how-do-i-include-a-javascript-file-in-another-javascript-file-dev/ | CC-MAIN-2022-40 | refinedweb | 1,065 | 59.6 |
. The book is a collection of items from previous books that the pair has worked on together. As such, if you do not own any of the previous books this one makes a great addition to your library.
The book covers 101 best practices guidelines for your C++ code. It does not cover things like indentation level and style of comments, the things that we have all come to consider to be part and parcel with a standards document. In fact, it explicitly avoids talking about those things and says that they should be internally consistent within each file. "C++ Coding Standards" covers topics on program design, object lifetime management, namespaces, generic programming (templates) and the standard library. | http://blog.emptycrate.com/node/355 | crawl-002 | refinedweb | 118 | 68.4 |
How To Beep In Python - 5 Simple Ways
In this tutorial I will show you 5 simple ways to generate a beeping sound in Python.
To generate a beeping sound in Python you have the following options:
- Use the bell character on the terminal
- Use AppKit to play MacOS system sounds
- Use winsound to play Windows system sounds
- Use pygame to play custom sound files
- Use simpleaudio to play custom sound files
- Use the beepy package
To put it in a more a pythonic way: how to make your machine go PING! (HINT: check the end of the article if you don’t get the reference.)
1. Using The Bell On The Terminal
There is a so-called bell character that you can use to issue a warning on a terminal. It is a nonprintable control code character, with the ASCII character code of 0x07 (BEL). You can trigger it by simply sending it to the terminal (note the backslash):
print('\a')
This is probably the most simple way of sounding a beep, though it is not 100% guaranteed to work on all systems. Most UNIX-like operating systems like macOS and Linux will recognize it, but depending on the current settings the bell might be muted or be represented as a flash on the screen (visual bell).
2. Using AppKit.NSBeep On MacOS
If you’re on a Mac, you can tap into the Objective-C libraries to generate a sound.
First you’ll need to install the PyObjC library:
pip install -U PyObjC
Then you can simply use the
AppKit interface the ring the default system sound, like so:
import AppKit AppKit.NSBeep()
3. Using winsound On Windows
On Windows operating systems you can use the
winsound library.
winsound needs no installation it is a builtin module on windows, so you should be able to access it by default
windound a have a handy
Beep API, you can even choose the duration and the frequency of the beep. This is how you generate a 440Hz sound that lasts 500 milliseconds:
import winsound winsound.Beep(440, 500)
You can also play different windows system sound effects using the
PlaySound method:
import winsound winsound.PlaySound("SystemExclamation", winsound.SND_ALIAS)
The same API can be used to play custom sound files using the
SND_FILENAME flag instead of
SND_ALIAS:
import winsound winsound.PlaySound("beep.wav", winsound.SND_FILENAME)
4. Playing Sound Files With pygame
Pygame is a modular Python library for developing video games. It provides a portable, cross-platform solution for a lot of video game and media related tasks, one of which is playing sound files.
To take advantage of this feature, first you’ll need to install pygame:
pip install pygame
Then you can simply use the
mixer the play an arbitrary sound file:
from pygame import mixer mixer.init() sound=mixer.Sound("bell.wav") sound.play()
Just like with the previous solution, you’ll need to provide you own sound file for this to work. This API supports OGG and WAV files.
5. Playing Sound Files With Simpleaudio
Simpleaudio is a cross-platform audio library for Python, you can use it to play audio files on Windows, OSX and Linux as well.
To install the simpleaudio package simply run:
pip install simpleaudio
Then use it to play the desired sound file:
import simpleaudio wave_obj = simpleaudio.WaveObject.from_wave_file("bell.wav") play_obj = wave_obj.play() play_obj.wait_done()
6. Use Package Made For Cross-Platform Beeping - Beepy
If you want a ready-made solution you can check out the
beepy package. Basically it’s a thin wrapper around simpleaudio, that comes bundled together with a few audio files.
As always, you can install it with pip:
pip install beepy
And then playing a beep sound is as simple as:
import beepy beep(sound="ping")
Summary
As you can see there are several different ways to go about beeping in Python, but which one is the best?
If you just want a quick and dirty solution I’d recommend trying to sound the terminal bell. If you want something more fancy or robust I’d go with winsound on windows or AppKit on a Mac. If you need a cross-platform solution your best bet will be using simpleaudio or pygame, to get a custom sound file played.
Congratulations, now you’ll be able to turn your computer into “The Machine That Goes PING”. | https://pythonin1minute.com/how-to-beep-in-python/ | CC-MAIN-2022-21 | refinedweb | 729 | 55.88 |
>>....
I'm glad to hear it (Score:5, Informative)
Re:As if enough people weren't already confused... (Score:4, Informative)
Re:Looks good, but a little hampered by C++ (Score:1, Informative)
void main() {
class local {
public: void hello() { printf("hello world\n"); }
};
local::hello();
}
Oh, and if you are worried about cluttering up "the namespace", that's what namespace MySpace { } is for
Re:GPL 2 (Score:3, Informative)
Re:GPL 2 (Score:2, Informative)
No you can't.
Re:I'm thinking (Score:5, Informative)
And, if there was, well it's under the GPL now, and I'm sure someone would have added / corrected that mistake.:As if enough people weren't already confused... (Score:4, Informative)
Agreed it does look to take a lot of the grunt work out of writing parallel-processing code. There are supposedly Java:This and XEN (Score:1, Informative):
OSes:
Compilers:
P.S. Slashdot pulled out all the trademark symbols, and doesn't support the sup tag, so you'll just have to picture them in all the appropriate spots.
:P:Nice Offering (Score:2, Informative)
Re:Looks good, but a little hampered by C++ (Score:3, Informative)
Local _functions_ aren't in C++, but may be a GCC extension - which might be confusing you., Microsoft and GNU.
Threading Building Blocks supports the following processors:
* Non Intel processors compatible with the above processors.
Re:Compatibility kinda sucks (Score:4, Informative)
Re:PS3? (Score:4, Informative)
Re:GPLv2 only (Score:3, Informative)
I would not. The verbatim GPLv2 states: verbatim GPLv2 does not prevent the licensor from specifying GPLv2, and programs licensed under "GPLv2" without "any later version" are expressly contemplated by the terms of the license.
They would also be in violation of the GPLv2 by having modified the version they are distributing, contrary to the terms of the license for distribution.
But they haven't.
However, anyone with a copy of this existing version would seemingly have the GPLv2 license in its original glory (and hence GPLv3 may apply).
And they do but it doesn't.
I haven't looked at the details; this is based on what you've just said. It's very interesting.
That is not an excuse. Your speculation concerning the terms of the GPLv2 had no basis in the grandparent's post.
A job for Fortran . . . (Score:3, Informative)
Fortran 90 and later already have the structures for this (Forall, etc).
*sigh*
hawk, who hasn't written a line in over two years
Re:I'm glad to hear it (Score:2, Informative):Memory requirements - bummer scalable. The queue, vector and hash table we provide are much better choices in a threaded application (with or without the other features of TBB) than using STL containers.
The scalable memory allocator is definitely a gem. The library for it is completely separate from the rest of TBB - so definitely a good place to start if you have a threaded application which still calls malloc() | http://developers.slashdot.org/story/07/07/25/1324221/intel-releases-threading-library-under-gpl-2/informative-comments | CC-MAIN-2014-49 | refinedweb | 497 | 54.83 |
How to create and use library projects in Android
Lars Vogel
Version 1.3
03.06.2013
Android Library Projects
This tutorial describes how to create and use library projects in Android. The tutorial is based on Eclipse 4.2, Java 1.6 and Android 4.2.
Android library projects allow to store source code and resources which are used by several other Android projects. The Android development tools compile the content of library into the Android project by creating a JAR file.
Library projects cannot be compiled to Android applications directly.
Using library projects help you to structure your application code. Also more and more important Open Source libraries are available for Android. Understanding library projects is therefore important for every Android programmer.
If the Android development tools build a project which uses a library project, it also builds the components of the library and adds them to the .apk file of the compiled application.
Therefore a library project can be considered to be a compile-time artifact. A Android library project can contain Java classes, Android components and resources. Only assets are not supported.
To create a library project, set the Mark this project as library flag in the Android project generation wizard.
To use such a library, select the generated project, right-click on it and select properties. On the Android tab add the library project to it.
The library project must declare all its components, e.g. activities, service, etc. via the AndroidManifest.xml file. The application which uses the library must also declare all the used components via the AndroidManifest.xml file.
Other projects can now use this library project. This can also be set via the properties of the corresponding project.
The Android development tools merges the resources of a library project with the resources of the application project. In the case that a resources ID is defined several times, the tools select the resource from the application, or the library with highest priority, and discard the other resource.
To use a Java library inside your Android project directly, you can create a folder called libs and place your JAR into this folder.
The following example assumes that you have created a normal Android project called com.example.android.rssfeed based on Android Fragments tutorial .
Create a new Android project called com.example.android.rssfeedlibrary. Do not need to create an activity.
Our library project will not contribute Android components but a data model and a method to get the number of instances. We will provide RSSfeed data. The following gives a short introduction into RSS.
An RSS document is an XML file which can be used to publish blog entries and news. The format of the XML file is specified via the RSS specification.
RSS stands for Really Simple Syndication (in version 2.0 of the RSS specification).
Typically a RSS file is provided by a webserver, which RSS client read. These RSS clients parse the file and display it.
Create an RssItem class which can store data of an RSS entry.
package com.example.android.rssfeedlibrary; public class RssItem { private String pubDate; private String description; private String link; private String title; }
Use Eclipse code generation capabilities which can be found in the menu under Source to + "]"; }
}
Create a new class called RssFeedProvider with a static method to return a list of RssItem objects.
package com.example.android.rssfeedlibrary; import java.util.ArrayList; import java.util.List; public class RssFeedProvider { // Helper method to get a list // of RssItems public static List<RssItem> parse(String rssFeed) { List<RssItem> list = new ArrayList<RssItem>(); // Create some example data RssItem item = new RssItem("test1", "l1"); list.add(item); item = new RssItem("test2", "l2"); list.add(item); // TODO Create a few more instances of RssItem return list; } }
Solve the TODOs to create example instances of the RssItem class and add it to the list. This method does currently only return test data.
Right-click on the Android project and select Properties. Ensure that the is Library flag is set.
In your application project defines that you want to use the library project via the project properties.
Use the static method of RssFeedProvider to get the list of RssItem objects and display the number in your DetailFragment instead of current system time.
To send the new data change your MyListFragment class.
// Update the method updateDetail() { // more code.... // Reading the RSS items List<RssItem> list = RssFeedProvider .parse(""); String text = String.valueOf(list.size()); // TODO send text to the Detail fragment
Please login in order to leave a comment. | http://www.chupamobile.com/tutorial-android/android-library-projects-tutorial-211 | CC-MAIN-2017-09 | refinedweb | 758 | 58.99 |
I’ve had enough of Rails testing. Coming from other languages/frameworks that know the importance of testing, the more I test in Rails the more I miss ’em. I don’t care how quick and easy I can make a web app in Rails; when it comes down to it, language, framework, whatever, it’s all about testing.
So I decided to try real testing in Rails. By real testing, I mean tests where each test only tests 1 thing (class). The UserTest is the test for just the User model. The UsersControllerTest is the test for just the UsersController. You say yeah I know, that’s what I’m already doing. Not if you’re using Rails you aren’t. In Rails, functional and unit tests are not real unit tests; a real unit test tests a single unit, i.e. 1 class, and all collaborators (other objects, database, etc.) are mocked up. Forget the word functional: controllers and models are classes, and you unit test classes. Outside Rails the word functional means more like what Rails calls integration tests: coarser-grained/high-level tests that test more end-to-end functionality than just a single unit (class).
In essence both functional and unit testing in Rails are like mini-integration tests, since you’re not just testing one controller; you’re testing that controller, the models/libraries it interacts with, the database adapters your models interact with, etc., etc. This means that a bug in a model that a controller interacts with could cause your controller test to break. Or a bug in a Rails database adapter could cause your model tests to fail. Or even better, a change to one fixture for one model can cause a test to fail in another model that collaborates with that model. An example:
# A model
class User < ActiveRecord::Base
  def foo
    f = Foo.new
    f.blah
  end
end

# Foo is a library defined in 'lib'
class Foo
  def blah
    'blah'
  end
end
User collaborates with Foo. In Rails testing you mock nothing, so your tests for User are dependent on the actual implementation of Foo. Uh-oh, there’s a bug in Foo#blah that causes the Foo tests to break… but wait, that’s not all that’s going to break: so do your User tests, because User collaborates with Foo. If we had mocked out Foo in our User test then the only failing test would be in Foo’s tests. Too bad we didn’t write our User test to just test the User model and not indirectly test its collaborators (Foo).
instead of
def test_should_return_blah_when_sent_foo
  user = User.new
  assert_equal 'blah', user.foo
end
we should have done (using Mocha, a mocking library for Ruby)
def test_should_return_blah_when_sent_foo
  foo = mock                      # a stand-in for Foo; only its interface matters
  foo.expects(:blah)              # the test fails if #blah is never sent
  Foo.expects(:new).returns foo   # Foo.new inside User#foo now returns our mock
  user = User.new
  user.foo
end
Foo is mocked out; we care nothing about its implementation, just its interface, i.e. that it responds to #blah. Therefore a change in Foo’s implementation does not affect our User test even though User collaborates with Foo.
This gets even more fun when your UsersController functional tests start failing because you changed a single attribute value in one of your User model fixtures: your assertions in your functional tests fail because your UsersController collaborates with your User model and loads in its fixtures.
You want to talk about brittle! Now that’s brittle baby!
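To make that concrete, here is a sketch of how it happens (the fixture and attribute values here are made up):

# test/fixtures/users.yml - shared by every test that loads :users
bob:
  id: 1
  name: Bob

# users_controller_test.rb - an assertion coupled to that fixture
def test_should_show_user
  get :show, :id => 1
  assert_equal 'Bob', assigns(:user).name
end

Rename ‘Bob’ in the fixture and this controller test fails, even though not a single line of UsersController changed.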
A lot of Rails users are running into various bugs with Rails not cleaning up your database tables on teardown of your Test::Unit::TestCases, leading to cases where the test passes when I run it individually, but when I run it with 'rake test:units' it fails. Some of my fellow t-boters’ (myself NOT included) solution:
Add the following class method (pseudocode) to all tests:
def self.load_all_fixtures
  fixtures :users, :articles, etc., etc.
end
Defined in ‘test/test_helper.rb’, it does what you think it does: loads every fixture for every model at the start of every test (not each test method, because we use transactional fixtures). That is pure trash! No more strange bugs, but any real tester would be disgusted. I just love sitting around watching tests run all day; that’s a lot more fun than actually coding features.
Ok then, let’s see what I really should be doing…
What we want is each of our Test::Unit::TestCases to test 1 class and 1 class only. I don’t want it indirectly testing its collaborators. So I want to mock all objects that it collaborates with.
An example for testing the #search action in the UsersController:
def test_should_find_all_users_whos_name_matches_the_given_search_phrase_on_GET_to_search
  users = []
  User.expects(:find).with(:all, :conditions => ['name like ?', '%a%']).returns users
  get :search, :q => 'a'
  assert_equal users, assigns(:users)
  assert_response :success
  assert_template 'search'
end

class UsersController
  def search
    @users = User.find :all, :conditions => ['name like ?', "%#{params[:q]}%"]
  end
end
The above functional test uses Mocha again.
This test is written in an interaction-based style as opposed to a state-based style; all I care about is that the #search action sends #find to the User class with the given parameters and assigns the return value to an instance variable named ‘users’. I’m not going to iterate through all the results and run regexps against each of their names. According to the Rails docs, calling #find on an ActiveRecord::Base model with those parameters will generate the SQL that will find records according to how I want to search (wildcard matching on the name column in the users table). Rails has already tested #find; I don’t need to indirectly test it in my UsersControllerTest. At the end of the test all my expectations set on my mocks will be verified automatically by Mocha; if any expectation is not met, the test will fail.
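For contrast, a state-based version of the same test would look roughly like this (a sketch; it assumes a users fixture full of records to search through):

fixtures :users

def test_should_find_all_users_whos_name_matches_the_given_search_phrase_on_GET_to_search
  get :search, :q => 'a'
  # every assertion here just re-verifies what User.find already does
  assigns(:users).each { |user| assert_match(/a/, user.name) }
end

That version needs the database, needs the fixtures, and indirectly re-tests ActiveRecord’s SQL generation through my controller.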
This test can run without any tables in your test database. (It can’t run without a test database at all, because somewhere when running a functional test Rails tries to connect to the test database even though this functional test specifies it uses no fixtures; guess Rails needs to initialize its test environment.)
Here at t-bot my colleague and I have used this approach in all our functional tests in a recent application, and we can run them all without any tables in our test database. This would be a huge benefit in terms of speed in any other framework that knew what real testing was about; however, in Rails it’s necessary that Rails is initialized and that a database is running every time you run a test. So it’s more of an ‘I knew we could do it’, but in any other framework our functional tests would absolutely smoke. No more watching tests run; now we can actually get some work done.
But wait…the test is exactly the same as the implementation. Thats no good.
This is true, the test is coupled to the implementation and the test is brittle, making refactoring basically not possible at all without breaking your tests. Also you could just write the implementation wrong, but I’d argue that when testing using mocks you still need more course-grained/high level end-to-end tests to catch instances like that.
What is implementation? Lets call implementation just the objects state i.e. instance variables and not how it interacts with other objects (behavior), then our tests are no longer coupled to an objects implementation because we only care about the messages its sends to other objects not their state. When testing like this the code results in a tell don’t ask (real OO) style of coding in which objects care very little about each other’s implementation i.e. state.
By mocking all collaborators, a bug in one class will only cause the test for that class to fail, it will not ripple out into any of its collaborators because all the tests for the collaborators have mocked it (the class with the failing test) out (like above, if we mocked Foo out in our User test, a Foo bug would of only broken the Foo test and not our User test as well). I argue that coupling a single test to its implementation better isolates your code from bugs in other parts of the code that it collaborates with. Not only that, someone else might be developing one of your object’s collaborators and you don’t even have access to its code. You don’t want to sit around and wait for them, so you agree on an interface w/ them for the object, and then you mock it out. Developers should never be waiting like that, it wastes time.
When first coming to Rails I never had seen fixtures used in a test environment, and thought (and still do) that they were ridiculous, not to mention the idea of a test environment and a test database. Come on, you mean everyone that runs these tests has to have MySQL installed with the right database and user accounts setup. Nice portability there Rails.
Agile development is about being agile; not about making sure you have MySQL installed correctly w/ the right user accounts, that you have the in-memory database setup correctly, or that your tests can contact some internal server running a fake credit card processing server, all that takes and wastes valuable time and slows your tests down big time, which results in more time lost.
In closing, I just want to say to, relax and stop drinking so much of the Rails Kool-aid. Everyone has just accepted Rails' fixtures and test environment and the fact they need MySQL running to run tests. Real testers way before Rails did testing right, no fixtures, no database, no ‘mini’ web server, no test environment; they knew and know what it means to be agile. Their tests ran on any machine because they had no external dependencies and their tests ran fast, making their development process and software that much more agile.
Rails didn’t do testing right and as a result this so called ‘agile’ framework isn’t as agile as it really could be.
References:
The difference between state-based and interaction-based testing by Martin Fowler, an object master and huge influence on Rails.
Nat Pryce - expert interaction-based tester and creator of jMock (one of the most popular mocking library in Java) and Ruby/Mock an extension to Test::Unit ‘implementation is really just state - not how an objects interacts with other objects’
Mocha - mocking and stubbing library for ruby | http://robots.thoughtbot.com/rails-dont-know-testing | CC-MAIN-2014-35 | refinedweb | 1,774 | 67.28 |
Let's say the right wheel moves b inches while the left wheel moved a inches during some time interval. Assume b > a. Then if we let the robot keep moving forever with those wheel speeds, the robot will make a big circle counterclockwise on the floor about some point X. We want to find X so that we can use a and b to find the new wheel positions on the floor.
We want to find ra, the distance from the left wheel to X.
We start by imagining the circumference of the circle the left wheel would travel in: ca. ca = 2*pi*ra. The right wheel's circle will have a radius of ra+w, where w is the wheelbase of the robot.
Since the two wheels are circling the same point on the floor with the axle always pointing toward the center of the circles, we know the distances a and b will constitute the same proportion of their respective circles. (If a goes all the way around its circle, b must have gone all the way around its circle too. Likewise if a goes 1/10th of the way around, etc.)
So we know that a/ca = b/cb, which is also the proportion of the circle we traveled (if a/ca = 0.10 then we've gone 10% around the circle). Substitute and solve and we get ra = w*a / (b-a).
Great! Now we know our radius of curvature.
Next, we want to find X, the actual point on the floor that we're circling. We start by finding theta, the angle we moved around the circle, by simply multiplying a/ca by 360 (for example, 0.10 * 360 = 36 degrees). If we want it in radians instead of degrees, it's even simpler, since multiplying by 2*PI cancels out the 2*PI in ca, leaving just a/ra.
We know that the point X is ra inches to the left of Pa, along a line which also passes through Pb. (X, Pa and Pb are 2-dimensional points). We get a vector from Pb to Pa with (Pa-Pb), make it length 1 by dividing by w, then multiply by ra so it's the right length to reach from Pa to X. Then we add it to Pa so that the vector ends at X.
Now we can find the updated wheel locations on the floor by rotating Pa and Pb by theta degrees around the point X. How do we do that? Well, we translate everything so that X is at the origin, then use a rotation matrix to rotate by theta, then translate back so that X is where it started:
Here's code that implements all of that (with saner variable names), plus the case where both wheels move the same amount):
// Just some math to turn wheel odometry into position updates
// Released into the public domain 3 June 2010
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#define PI (3.14159)
#define WHEELBASE (12.0)
// left wheel
double Lx = -WHEELBASE/2.0;
double Ly = 0.0;
// right wheel
double Rx = WHEELBASE/2.0;
double Ry = 0.0;
// given distances traveled by each wheel, updates the
// wheel position globals
void update_wheel_position(double l, double r) {
if (fabs(r - l) < 0.001) {
// If both wheels moved about the same distance, then we get an infinite
// radius of curvature. This handles that case.
// find forward by rotating the axle between the wheels 90 degrees
double axlex = Rx - Lx;
double axley = Ry - Ly;
double forwardx, forwardy;
forwardx = -axley;
forwardy = axlex;
// normalize
double length = sqrt(forwardx*forwardx + forwardy*forwardy);
forwardx = forwardx / length;
forwardy = forwardy / length;
// move each wheel forward by the amount it moved
Lx = Lx + forwardx * l;
Ly = Ly + forwardy * l;
Rx = Rx + forwardx * r;
Ry = Ry + forwardy * r;
return;
}
double rl; // radius of curvature for left wheel
rl = WHEELBASE * l / (r - l);
printf("Radius of curvature (left wheel): %.2lf\n", rl);
double theta; // angle we moved around the circle, in radians
// theta = 2 * PI * (l / (2 * PI * rl)) simplifies to:
theta = l / rl;
printf("Theta: %.2lf radians\n", theta);
// Find the point P that we're circling
double Px, Py;
Px = Lx + rl*((Lx-Rx)/WHEELBASE);
Py = Ly + rl*((Ly-Ry)/WHEELBASE);
printf("Center of rotation: (%.2lf, %.2lf)\n", Px, Py);
// Translate everything to the origin
double Lx_translated = Lx - Px;
double Ly_translated = Ly - Py;
double Rx_translated = Rx - Px;
double Ry_translated = Ry - Py;
printf("Translated: (%.2lf,%.2lf) (%.2lf,%.2lf)\n",
Lx_translated, Ly_translated,
Rx_translated, Ry_translated);
// Rotate by theta
double cos_theta = cos(theta);
double sin_theta = sin(theta);
printf("cos(theta)=%.2lf sin(theta)=%.2lf\n", cos_theta, sin_theta);
double Lx_rotated = Lx_translated*cos_theta - Ly_translated*sin_theta;
double Ly_rotated = Lx_translated*sin_theta + Ly_translated*sin_theta;
double Rx_rotated = Rx_translated*cos_theta - Ry_translated*sin_theta;
double Ry_rotated = Rx_translated*sin_theta + Ry_translated*sin_theta;
printf("Rotated: (%.2lf,%.2lf) (%.2lf,%.2lf)\n",
Lx_rotated, Ly_rotated,
Rx_rotated, Ry_rotated);
// Translate back
Lx = Lx_rotated + Px;
Ly = Ly_rotated + Py;
Rx = Rx_rotated + Px;
Ry = Ry_rotated + Py;
}
main(int argc, char **argv) {
if (argc != 3) {
printf("Usage: %s left right\nwhere left and right are distances.\n",
argv[0]);
return 1;
}
double left = atof(argv[1]);
double right = atof(argv[2]);
printf("Old wheel positions: (%lf,%lf) (%lf,%lf)\n",
Lx, Ly, Rx, Ry);
update_wheel_position(left, right);
printf("New wheel positions: (%lf,%lf) (%lf,%lf)\n",
Lx, Ly, Rx, Ry);
} | http://credentiality2.blogspot.com/2010/06/going-from-odometry-to-position-in-two.html | CC-MAIN-2017-22 | refinedweb | 888 | 70.53 |
Pa. OpenShift is a Platform as a Service (PaaS) that “manages the stack so you can focus on your code.” It promised a way to for me to run all the new technologies my web host won’t allow. I had to try it out.
What I Want to Build
I make a lot of maps. Just simple point maps with a polygon here or there. No real geoprocessing or analysis. Just for displaying data. My application needed to display a map. My front-end of choice is Leaflet.js – a JavaScript library- and I have to be able to use it.
I am not a programmer but dabble in a few languages. Python is the one I am most comfortable with and I am more fluent in it than any other language. It is also a language I enjoy and having an interactive shell makes my coding go much faster. I can test solutions before they ever go in my program. If I am going to have a chance at success, I have to use Python.
All of my Python programming on the web has been using CherryPy. It is a minimal framework that was easy to install and to learn. I only need to do simple things like route and pass data so why go Django or Pylons – Pyramid now?
I really like MongoDB. You may think spatial is special and get all PostGIS on me, but for what I need and what I like to do, I am all about MongoDB right now. So I need to be able to install it.
To summarize, I need – or want:
- Leaflet.js
- Python
- CherryPy
- MongoDB
Getting Started
Go to OpenShift and setup an account.
I am not a reader of manuals – OpenShift has a good one – but more of an experimenter and skimmer. I used the OpenShift getting started and a blog post – REST web services with Python, MongoDB, and Spatial data in the Cloud – to get up and running. The blog used Flask, which I have never used before, but it does the same thing I was using CherryPy for, so I used it. In reading more about Flask, I found that it has a templating engine called Jinja2 available. When pushing my code to GIT, I saw that Jinja was already on OpenShift. I will admit, I am bad about templating and put all my HTML in a variable and have a function ‘return HTML.’ With this application, I wanted to break that bad habit.
After getting the Ruby Installer and Git, I followed all the instructions in the getting started and the blog and was left looking at a folder on my computer with some stuff inside of it. Now it’s time to play.
Coding the Application
The stuff in my folder is an application using Flask that I grabbed from GIT. The important part is in the WSGI folder. The MyFlaskApp file has code that grabs data from a MongoDB and dumps it out to a webpage. This is the file I will edit to make my app.
Before I do anything else, I want to write my webpage. It will be nothing more than a simple Leaflet map with one marker at a point retrieved from MongoDB. It will also be a Jinja Template.
Here is the code for my webpage.
<!doctype html>
<head><title>OpenShift: Leaflet, MongoDB, Python. Jinja2</title>
<link rel=’stylesheet’ href=’’ />
</head>
<div id=’map’ style=’width: 900px; height: 350px’></div>
<script src=’’></script>
<script>
var map = L.map(‘map’).setView([40.71367, -73.99364 ], 13);
L.tileLayer(‘http://{s}.tile.cloudmade.com/API-KEY/997/256/{z}/{x}/{y}.png’,{attribution:’Paul Crickard’, maxZoom: 18 }).addTo(map);
L.marker({{ coord["pos"] }}).addTo(map)
</script>
This file needs to be put in a folder in WSGI called TEMPLATES. You will notice another folder in the directory called STATIC. This is where you can put CSS and JS files.
The code is straight HTML and JavaScript with the exception of {{ coord[“pos’ }}. This is the Jinja part of the template. In the Flask Application, I will pass coord to the template.
Let’s take a look at a simple Flask application that will grab the point from MongoDB and send it to my Jinja Template. Here it is:
from flask import render_template
@app.route(“/ws/albuquerque”)
def albuquerque():
conn = pymongo.Connection(os.environ['OPENSHIFT_MONGODB_DB_URL'])
db = conn.parks
for z in db.location.find():
q=z
return render_template(‘map2.html’,coord=q)
The file I modified already has several import statements but you will need to add render_template to use Jinja. I changed the route to /albuquerque – I was going to use some local data but decided for a test I would use the parks JSON file so now the Albuquerque doesn’t make much sense. Then I grab a cursor and pass a point to q. Lastly, I call the render_template and hand it q as coord. Now in my HTML/Jinja Template, I can call coord. For the future, I would run a loop in the Template that grabbed all the points. A loop in Jinja looks like this:
{% for x in json %} Do something with X: L.marker({{ x }}).addTo(map) {% endfor %}
Now go to my RHCloud.com and see it in action. It seems rather simple – I need to add some popups – but there is some powerful technology behind this. MongoDB has some spatial features like $near, $box or $within $center of a circle. This should somewhat please the spatial is special crowd. One of the things I love is that I can have an object {“pos”:[35,-106],”name”:”ABQ”} and I can have another {“pos”:[33,-106],”name”:”LL”, “visitors”: 50}. In a table, every record has the same fields – I don’t have to do that in MongoDB. I see MongoDB as much more flexible for my needs.
Next Steps
What functionality would I like to add to this?
- A form that allows the user to select either all the data or a subset using simple radio buttons or combo boxes.
- A way to download a report of the data selected.
- A form for users to enter new data.
- Popups with additional data. This should only require adding
.bindPopup( {{coord["popupContent']}} )
One last thing: It runs on mobile. My HTML is not very good, but a mobile Leaflet app is simple. I have one that uses your phone’s current location on my website.
2 Responses to “OpenShift: Leaflet.js, MongoDB, and Flask” | http://paulcrickard.wordpress.com/2012/11/28/openshift-leaflet-js-mongodb-and-flask/ | CC-MAIN-2014-42 | refinedweb | 1,089 | 75.81 |
Add new rows and columns to Pandas dataframe
We often get into a situation where we want to add a new row or column to a dataframe after creating it. A quick and dirty solution which all of us have tried atleast once while working with pandas is re-creating the entire dataframe once again by adding that new row or column in the source i.e. csv, txt, DB etc. Pandas is a feature rich Data Analytics library and gives lot of features to achieve these simple tasks of add, delete and update. In this post we will see what are the different ways a Pandas user can add a new row or column to a dataframe.
Create a New Dataframe with Sales data from three different region. We have data from following region: West, North and South.
import pandas as pd df = pd.DataFrame({'Region':['West','North','South'], 'Company':['Costco','Walmart','Home Depot'], 'Product':['Dinner Set','Grocery','Gardening tools'], 'Month':['September','July','February'], 'Sales':[2500,3096,8795]}) df
Let’s see how to add the data for one region i.e. East in this dataframe. Lets add this region as a separate new row in the dataframe.
Add Row to Dataframe:
New Data Row for East Region:
This is a data dictionary with the values of one Region - East that we want to enter in the above dataframe. The data is basically a list with Dictionary having column as key and their corresponding values.
data = [{'Region':'East','Company':'Shop Rite','Product':'Fruits','Month':'December','Sales': 1265}]
We would be using Dataframe Append function to add this data for Region-East into the existing dataframe.
How does Pandas Append Function works?
It basically creates a new dataframe object with the new data row at the end of the dataframe. The old dataframe will be unchanged.
df.append(data,ignore_index=True,sort=False)
ignore_index will set the index label if set True, You can see the new row inserted having index value as 3. The default sorting is deprecated and will change to not-sorting in a future version of pandas.
If
sort set to True then columns will be sorted in alphabetical order and if column starts with numbers then those columns will be first followed by the columns that doesn’t start with numbers and sorted alpabetically.
df.append(data,ignore_index=True,sort=True)
Dataframe loc to Insert a row
loc is used to access a group of rows and columns by labels or a boolean array. However with an assignment(=) operator you can also set the value of a cell or insert a new row all together at the bottom of the dataframe
df.loc[3]=list(data[0].values()) or df.loc[len(df.index)]=list(data[0].values()) df
Dataframe iloc to update row at index position
We can replace a row with the new data as well using
iloc, which is integer-location based indexing for selection by position. In our original dataframe we want to replace the data for North Region with the new data of East Region. Update the row at index position 1 using
iloc and
list values of the data dictionary i.e.
list(data[0].values()) = ['East','Shop Rite','Fruits','December',1265]
Update row at Index position 1
df.iloc[1]=list(data[0].values()) df
So far we have seen how to add the new row at the bottom of the dataframe or replace an existing row with the new one. In the following section we will see how to add a new row in between two rows of a dataframe.
Insert row at specific Index Position
In our original dataframe we will add the new row for east region at position 2 i.e. insert this new row at second position and the existing row at index 1,2 will cut over to index 2,3
data = pd.DataFrame({'Region':'East','Company':'Shop Rite','Product':'Fruits','Month':'December','Sales': 1265}, index=[0.5]) df = df.append(data, ignore_index=False) df = df.sort_index().reset_index(drop=True) df
Dataframe append to add New Row
You can also add a new row as a dataframe and then append this new row to the existing dataframe at the bottom of the original dataframe. This is a quick solution when you want to do keep the new record separately in a different dataframe and after some point in time you need to merge that together.
b = pd.DataFrame.from_dict(data) b
Add this new dataframe of new row into our existing original dataframe(df)
df.append(b,sort=False)
Add New Column to Dataframe
Pandas allows to add a new column by initializing on the fly. For example: the list below is the purchase value of three different regions i.e. West, North and South. We want to add this new column to our existing dataframe above
purchase = [3000, 4000, 3500] df.assign(Purchase=purchase)
Add Multiple Column to Dataframe
Lets add these three list (Date, City, Purchase) as column to the existing dataframe using
assign with a dict of column names and values
Date = ['1/9/2017','2/6/2018','7/12/2018'] City = ['SFO', 'Chicago', 'Charlotte'] Purchase = [3000, 4000, 3500] df.assign(**{'City' : City, 'Date' : Date,'Purchase':Purchase})
| https://kanoki.org/2019/08/03/add-new-rows-and-columns-to-pandas-dataframe/ | CC-MAIN-2022-33 | refinedweb | 878 | 61.97 |
Errors and Exceptions
If you (and you will) write code that doesn’t work, you will get an error message.
What are exceptions?
Exceptions is what you get after you have first ran the program.
Different Errors
There are different kind of errors in Python, here are a few of them:
ValueError, TypeError, NameError, IOError, EOError, SyntaxError
This output show a NameError:
>>> print 10 * ten Traceback (most recent call last): File "", line 1, in NameError: name 'ten' is not defined and this output show it's a TypeError >>> print 1 + 'ten' Traceback (most recent call last): File "", line 1, in TypeError: unsupported operand type(s) for +: 'int' and 'str'
Try and Except
There is a way in Python that helps you to solve this : try and except
#Put the code that may be wrong in a try block, like this: try: fh = open("non_existing_file") #Put the code that should run if the code inside the try block fails, like this: except IOError: print "The file does not exist, exiting gracefully" #Putting it together, it will look like this: try: fh = open("non_existing_file") except IOError: print "The file does not exist, exiting gracefully" print "This line will always print"
Handling EOFErrors
import sys try: name = raw_input("what is your name?") except EOFError: print " You did an EOF... " sys.exit() If you do an ctrl+d, you will get an output like this: >>what is your name? >>You did an EOF...
Handling KeyboardInterrupts
try: name = raw_input("Enter your name: ") print "You entered: " + name except KeyboardInterrupt: print "You hit control-c" If you press ctrl+c, you will get an output like this: >>Enter your name: ^C >>You hit control-c
Handling ValueErrors
while True: try: x = int(raw_input("Please enter a number: ")) break except ValueError: print "Oops! That was no valid number. Try again..."
For a full list of all Python’s built-in exceptions, please see this post
Recommended Python Training
For Python training, our top recommendation is DataCamp. | https://www.pythonforbeginners.com/error-handling/how-to-handle-errors-and-exceptions-in-python | CC-MAIN-2021-31 | refinedweb | 328 | 56.73 |
ares_set_local_dev man page
ares_set_local_dev — Bind to a specific network device when creating sockets.
Synopsis
#include <ares.h> void ares_set_local_dev(ares_channel channel, const char* local_dev_name)
Description
The ares_set_local_dev function causes all future sockets to be bound to this device with SO_BINDTODEVICE. This forces communications to go over a certain interface, which can be useful on multi-homed machines. This option is only supported on Linux, and root privileges are required for the option to work. If SO_BINDTODEVICE is not supported or the setsocktop call fails (probably because of permissions), the error is silently ignored.
See Also
ares_set_local_ip4(3) ares_set_local_ip6(3)
Notes
This function was added in c-ares 1.7.4
Author
Ben Greear
Info
30 June 2010 | https://www.mankier.com/3/ares_set_local_dev | CC-MAIN-2018-39 | refinedweb | 117 | 57.67 |
The
fetchFile function is a wrapper around
fetch which provides support for path prefixes and some additional loading capabilities.
Use the
fetchFile function as follows:
import {fetchFile} from '@loaders.gl/core'; const response = await fetchFile(url); // Now use standard browser Response APIs // Note: headers are case-insensitive const contentLength = response.headers.get('content-length'); const mimeType = response.headers.get('content-type'); const arrayBuffer = await response.arrayBuffer();
The
Response object from
fetchFile is usually passed to
parse as follows:
import {fetchFile, parse} from '@loaders.gl/core'; import {OBJLoader} from '@loaders.gl/obj'; const data = await parse(fetchFile(url), OBJLoader);
Note that if you don't need the extra features in
fetchFile, you can just use the browsers built-in
fetch method.
import {parse} from '@loaders.gl/core'; import {OBJLoader} from '@loaders.gl/obj'; const data = await parse(fetch(url), OBJLoader);
A wrapper around the platform
fetch function with some additions:
setPathPrefix: If path prefix has been set, it will be appended if
urlis relative (e.g. does not start with a
/).
Fileand
Blobobjects on the browser (and returns "mock" fetch response objects).
Returns:
A promise that resolves into a fetch
Response object, with the following methods/fields:
headers:
Headers- A
Headersobject.
arrayBuffer(): Promise.ArrayBuffer
- Loads the file as anArrayBuffer`.
text(): Promise.String` - Loads the file and decodes it into text.
json(): Promise.String` - Loads the file and decodes it into JSON.
body: ReadableStream` - A stream that can be used to incrementally read the contents of the file.
Options:
Under Node.js, options include (see fs.createReadStream):
options.highWaterMark(Number) Default: 64K (64 * 1024) - Determines the "chunk size" of data read from the file.
This function only works on Node.js or using data URLs.
Reads the raw data from a file asynchronously.
Notes:
setPathPrefixwill be appended to relative urls.
fetchFilewill delegate to
fetchafter resolving the URL.
File/
Blobobjects a mock
Responseobject will be returned, and not all fields/members may be implemented.
Content-Lengthand
Content-Type
headersare also populated for non-request data sources including
File,
Bloband Node.js files.
fetchFileis intended to be a small (in terms of bundle size) function to help applications work with files in a portable way. The
Responseobject returned on Node.js does not implement all the functionality the browser does. If you run into the need
readFileand
readFileAsyncfunctions with other loaders.gl functions is entirely optional. loader objects can be used with data loaded via any mechanism the application prefers, e.g. directly using
fetch,
XMLHttpRequestetc. | https://loaders.gl/docs/api-reference/core/fetch-file/ | CC-MAIN-2019-35 | refinedweb | 412 | 52.36 |
Flashcard script
Find here a script for creating and viewing a set of flashcards using ui. As soon as I figure out how (help, help), I'll upload the views. Thanks to omz for an incredible programming environment for iOS.
- SpotlightKid
You can open the
.pyuifiles in the Pythonista editor as text files from the console:
import editor editor.open_file('foo.pyui')
From there you can use the action menu to export it. Be sure that the
.pyuifile isn't already open in the view editor, it won't open in the text editor in this case.
Or you can directly open it in another app (e.g. save it to Dropbox) via the
console.open_in()function:
import console console.open_in('foo.pyui')
For your first suggestion, the export action is not present in this situation.
I would highly recommend that you use PackUI to create a single file that contains both the .py and .pyui files into a single file that you can then store on GitHub. PackUI is discussed here.
Used Git commit to get the files up. Will try PackUI at some point. See my question regard multiple pyui files.
- SpotlightKid
For your first suggestion, the export action is not present in this situation.
You're right. Should have checked ;) But the custom actions are present, as you have noticed.
Please note that I tried to build a zip file with some pictures in the desired hierarchy. Too big. I will put up a "test builder" to create a test set real soon now. Its easy enough using shellista and wget to roll your own. Just create 'ASL' folder in you root directory, and create subfolders in there with appropriate names and populate these with picture files. You can change the "base" folder to whatever/wherever you like. Also note new repository. It includes my original hydrogen based gui version.
There is now a script, make_asl_dir.py to build a simple test "book" for demo purposes.
When I download and try to run these files into the directory
<documents path>/UI/Flashcards_UI/then things do not work as expected because
make_asl_dir.pycreates
<documents path>/UI/Flashcards_UI/ASLbut
Flashcard_UI.pyis looking for
<documents path>/ASL.
Putting
root_doc_dir = os.path.dirname(__file__)on line 253 of
Flashcard_UI.pymakes the both scripts use the same
<documents path>/UI/Flashcards_UI/ASL.
I have posted a new version of the flashcard program. <br><br>o - the chapter chooser data source is now defined by a class and allows dynamic modification of individual cells. <br>o - when a flashcard is displayed, its chapter is highlighted in red. <br>o - the ASL directory (the main "book") resides in the same folder as the script.<br><br>There are two additional scripts. One to build a test "book" and one to download a book from a suitable webserver and create the "book" in the current folder. | https://forum.omz-software.com/topic/1008/flashcard-script | CC-MAIN-2018-13 | refinedweb | 481 | 68.97 |
XNA to SilverXNA–Part 2 Getting your XNA project running in Silverlight
XNA to SilverXNA–Part 2 Getting your XNA project running in Silverlight
Join the DZone community and get the full member experience.Join For Free
In Part 1 I went over some of the things you’ll need to do to get your current XNA project ready for use in the Silverlight XNA integration which I've dubbed as SilverXNA for this tutorial series.
Today I'm going to walk you through this by migrating the Platformer Sample from the AppHub Educational content into a new SilverXNA project.
If you want to read more about what happens under the covers between the Silverlight and XNA frameworks you can read one of my previous posts here, mostly technical stuff and a Silverlight primer fro XNA devs.
I’m endeavouring to keep this tutorial open to all levels of dev’s, so if some of the instructions are a bit basic for you, just skim read them as needed (just pay attention
)
Full source for the completed project can be found here on codeplex
The main focus of this chapter is to just get us running and the problems I've faced in getting this to just run, nothing fancy just simple baby steps to show off the impact of the changes were going to make later.
Now if you are converting your own project along side me with this tutorial, make sure you have read through Part one and got a heads up of the main impacts to your project.
Follow along with the series here:
Part 1 – an OverviewPart 1 – an Overview
Part 2 – Getting Started (here)Part 2 – Getting Started (here)!
Lets get the new project started
So keeping it simple we’ll create a new SilverXNA project, thankfully in the latest Beta 2 phase of the tools they have fixed one of my pet peeves where the Silverlight version of the “Windows Phone Rich Graphics Application” and the XNA “Windows Phone Silverlight and XNA application” which was that the two projects were completely different and in fact one of them didn’t work out of the box. (you can read more about this here, in fact the Rich Graphics app used to be the XNA project and the Silverlight one was called a “Windows Phone 3D graphics application”
), so that’s sorted now but if you compare the two projects there are subtle differences, why they are not the same project for both with the same name is beyond anyone’s guess.
So now it doesn’t matter which one you pick as they are both effectively the same so just pick the one nearest to you (I used the XNA version when prepping for this tutorial series and am now using the Silverlight one for the sample project
)
Once you have got it setup you should see the new SilverXNA solution with it’s three projects, A Silverlight C# project(if you chose C# that is, if you are running VB then it’ll obviously be VB, but this tutorial is written for C# so you’ll just have to follow along and convert in your head, the same tricks will work through), an XNA game library (the bridge between Silverlight and XNA) and the XNA Content Project.
If you run this now you’ll get the two new starter screens:
Nothing spectacular but it does give us a chance to see a good old clean Cornflower Blue page again
.
Brining in the Rain
So with our new project setup, first thing we need to do is bring in our XNA game project with a twist. make sure you have downloaded the Platformer sample from the AppHub first and have it unpacked somewhere.
First remove the “Content” project as we are going to be using the one from our XNA game project, next right click on the Solution and select “Add –> New Project” then select from the XNA branch of the New Project wizard the “Windows Phone Game Library (4.0)” project, name it something appropriate as this is were we are going to copy the Platformer code to (I used PlatformerGameLibrary).
Next Right-Click on the new PlatformerGameLibrary project and select “Add –> Existing Item” which will pop up the File Browse wizard and navigate to the folder where the Platformer sample game code is located, this is key especially if you want to maintain a multi-platform project where we want to share code.
Now select all the “.CS” files with the exception of the “Program.cs” and “Platformergame.cs”. We need to manage in the code from the game.cs file so it fit’s properly with the new SilverXNA project plus we don’t want it interfering with the build, as for the Program.CS file Windows Phone doesn’t even use the
If you just want a separate SilverXNA project in which case just copy the code directly into your new library or even the existing library that came with the solution and skip the previous step.
Next Right click the Solution and select “Add –> Existing Project” and browse to the location of the Platformer sample Content folder, then select the content project there. You should now end up with the following:
And now comes the Breaking
At this point we have pure XNA code in a Silverlight project, so not only will it not run it won’t even compile….
Now if you are doing your own project you should have already done the prerequisites to your project from the instructions in Part 1 but I'm going to re-iterate through them here for the Platformer sample.
References
First off tidy up the references and add the ones we require for each project:
- Reference the Content Project from the SilverXNA (main project link library) project
- Reference the PlatformerGameLibrary project from the Main Silverlight Project
- Add a reference to “Microsoft.Phone.Sensors” to the PlatformerGameLibrary project (as we are using the accelerometer)
- Add a reference to “Microsoft.Phone” to the PlatformerGameLibrary project (as we are using the some native API’s)
Change the scope of the base objects in the PlatformerGameLibrary
Edit the following files and simply make the classes and enumerations within them “Public” so they will be exposed outside the game library
- AnimationPlayer.cs
- Circle.cs
- Enemy.cs
- Gem.cs
- Level.cs
- Player.cs
- Tile.cs
For example, change the following:
From:
/// /// Facing direction along the X axis. /// enum FaceDirection { Left = -1, Right = 1, } /// /// A monster who is impeding the progress of our fearless adventurer. /// class Enemy { public Level Level { get { return level; } } Level level;
To:
/// /// Facing direction along the X axis. /// public enum FaceDirection { Left = -1, Right = 1, } /// /// A monster who is impeding the progress of our fearless adventurer. /// public class Enemy { public Level Level { get { return level; } } Level level;
Replace GameTime references to just use the TimeSpan ElapsedGameTime variable in the function parameters
In the Update and Draw functions of the above classes replace “GameTime gametime” with “TimeSpan elapsedGameTime” where applicable
From:
public void Update(GameTime gameTime) {
To:
public void Update(TimeSpan elapsedGameTime) {
Fix code that originally use the GameTime variable
Again in the above classes update and draw functions, remove references that use the gameTime variable as it was passed to the function so that it now uses the elapsedGameTime variable
From:
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
To:
float elapsed = (float)elapsedGameTime.TotalSeconds;
Final tidy ups
Clean up the remaining broken references in Player.cs and Level.cs by replacing any mentions of “gameTime” with “elapsedGameTime”. There are a few dotted around including some internal functions such as “DoJump” and “gem.Update(gameTime)”
There’s always one exception
The only reference I was unable to fix was in “Gem.cs” where the project actualy makes use of the “TotalGameTime” property of the original GameTime class. As it’s only one reference I decided to overlook this and just replaced it with “elapsedGameTime” just to keep things simple and it doesn’t overly affect the end result. If it were really important or if it was my own project I may have looked to refactor this a bit better or as stated before passed both elapsed and total time to the function that needed it.
At this point your project should compile with no errors, granted if you run it you will just have an empty blue screen but we know the game code compiles fine.
Final cut
So with our project in a state where it will work with SilverXNA its time to get this show on the road, now we just need to copy all the relevant bits from the original PlatformerGame..cs and get them working in our new project.
Constructor and Variables
First we need to sort out all the primary variables our game uses and the initialisation logic, so open up “GamePage.XAML.cs” and add the following variables to the top of the GamePage class just below the content manager, Timer and spritebatch variables (fixing any broken references as you go):
// Global content. private SpriteFont hudFont; private Texture2D winOverlay; private Texture2D loseOverlay; private Texture2D diedOverlay; // Meta-level game state. private int levelIndex = -1; private Level level; private bool wasContinuePressed; // When the time remaining is less than the warning time, it blinks on the hud private static readonly TimeSpan WarningTime = TimeSpan.FromSeconds(30); // We store our input states so that we only poll once per frame, // then we use the same input state wherever needed private GamePadState gamePadState; private KeyboardState keyboardState; private TouchCollection touchState; private AccelerometerState accelerometerState; // The number of levels in the Levels directory of our content. We assume that // levels in our content are 0-based and that all numbers under this constant // have a level file present. This allows us to not need to check for the file // or handle exceptions, both of which can add unnecessary time to level loading. private const int numberOfLevels = 3;
And add the following to the end of the GamePage constructor:
Accelerometer.Initialize();
A few notes about the above before we continue, I’ve kept in the references to the Keyboard and Gamepad states, mainly to keep uniformity down the line in the main game project, you can remove them if you with but at present they are doing no harm as they are effectively not being used anyway, its all just about cross platform support.
As for the Accelerometer, I’ve had to tweak it slightly, as we only want the accelerometer on when the GamePage is running and stop it when it’s not but the current AccelerometerHelper is not designed that way, so to avoid the game crashing when you run it twice I have replaced the error exception when it is started twice with a return statement, fix this if you wish or just check out the debugger when it breaks. (edit Accelerometer.cs and replace the entire line of “Throw InvalidOperationException” with a simple “Return” statement)
Navigation and Content Loading
Now unlike native XNA where the Content Loading process is handled by the Main Game framework in SilverXNA we have to do it manually, now this can be both a blessing and a pain just because of the many ways and events available to us when a page is navigated to and from. the Navigated to event is fired just after the page has been constructed in memory but before it is presented to the screen so it is a handy place to load content that needs to be drawn to the screen.
With all XNA games knowing what content to load and when is very important so only load what you immediately need and delay loading anything else till later (like in the onLoaded event which is when the page has finished presenting to the screen or better yet offload it to another thread when the page has finished loading else you may lock up screen drawing if it takes a long time)
In this case we don’t have that much to load so we can do it all at once, just add the following in the “onNaviagatedTo” function just before the “Timer.Start()” call:
LoadContent();
and then add the respective function just after the “OnNavigatedTo” function:
/// /// LoadContent will be called once per game and is the place to load /// all of your content. /// void LoadContent() { // Load fonts hudFont = contentManager.Load("Fonts/Hud"); // Load overlay textures winOverlay = contentManager.Load("Overlays/you_win"); loseOverlay = contentManager.Load("Overlays/you_lose"); diedOverlay = contentManager.Load("Overlays/you_died"); //Known issue that you get exceptions if you use Media PLayer while connected to your PC //See //Which means its impossible to test this from VS. //So we have to catch the exception and throw it away try { MediaPlayer.IsRepeating = true; MediaPlayer.Play(contentManager.Load("Sounds/Music")); } catch { } LoadNextLevel(); }
In this I’ve fixed up for you the Content Manager references to use the SilverXNA one over the use of the Game class “Content” references, in SilverXNA the Content Manager is exposed as a service unlike XNA where is it just provided out of the box.
Don't forget to fix the references to the MediaPlayer, ignore the missing “LoadNextLevel” function for now as we will get to that later.
Pages should really unload their content if the assets are not going to be used any more so keep this in mind and handle the unloading of any unneeded content in the “onNavigatedFrom” function. As we are constantly using the same assets in this game we need not bother.
Game Update function
Next up is the update function (if for no other reason that Update is called before Draw), thankfully in the project template has already wired up the Timer and associated events for Update and Draw so just add the following to the onUpdate function in the GamePage.XAML.cs file:
// Handle polling for our input and handling high-level input HandleInput(); // update our level, passing down the GameTime along with all of our input states level.Update(e.ElapsedTime, keyboardState, gamePadState, touchState, accelerometerState, this.Orientation.ToXNAOrientation());
Not much to talk about here just basic game logic stuff, again ignore the red squiggles as we will come back to it later. Note the reference to “e.ElapsedTime”, this comes from the “GameTimerEventArgs” we talked about recently which is the replacement for the XNA GameTime class, here is where you would need to also grab the TotalGameTime if needed.
Game Draw Function
As with the Update function, the Draw function is just admin stuff really and I've tidied up references to the Spritebatch as they were slightly different, so just add the following under the comment in the “onDraw” function:
spriteBatch.Begin(); level.Draw(e.ElapsedTime, spriteBatch); DrawHud(); spriteBatch.End();
You know the drill broken bits to be fixed shortly
The last bits, the supporting actors
And finally (well almost) here’s the rest of the supporting code and functions from the original Platformer Game sample, I’ve tweaked and prodded where needed just to line up use of GameTime and such but I’ve not had to change much, granted a lot of this could be handled from within the game library, for others however this actually helps us because it’s stuff I’ going to rip out and replace with Silverlight:
SO just paste the following after the “onDraw” function:
view source print? 001 private void DrawHud() 002 { 003 Microsoft.Xna.Framework.Rectangle titleSafeArea = 004 SharedGraphicsDeviceManager.Current.GraphicsDevice.Viewport.TitleSafeArea; 005 Vector2 hudLocation = new Vector2(titleSafeArea.X, titleSafeArea.Y); 006 Vector2 center = new Vector2(titleSafeArea.X + titleSafeArea.Width / 2.0f, 007 titleSafeArea.Y + titleSafeArea.Height / 2.0f); 008 009 // Draw time remaining. Uses modulo division to cause blinking when the 010 // player is running out of time. 011 string timeString = "TIME: " + level.TimeRemaining.Minutes.ToString("00") + 012 ":" + level.TimeRemaining.Seconds.ToString("00"); 013 Color timeColor; 014 if (level.TimeRemaining > WarningTime || 015 level.ReachedExit || 016 (int)level.TimeRemaining.TotalSeconds % 2 == 0) 017 { 018 timeColor = Color.Yellow; 019 } 020 else 021 { 022 timeColor = Color.Red; 023 } 024 DrawShadowedString(hudFont, timeString, hudLocation, timeColor); 025 026 // Draw score 027 float timeHeight = hudFont.MeasureString(timeString).Y; 028 DrawShadowedString(hudFont, "SCORE: " + level.Score.ToString(), hudLocation + new Vector2(0.0f, timeHeight * 1.2f), 029 Color.Yellow); 030 031 // Determine the status overlay message to show. 032 Texture2D status = null; 033 if (level.TimeRemaining == TimeSpan.Zero) 034 { 035 if (level.ReachedExit) 036 { 037 status = winOverlay; 038 } 039 else 040 { 041 status = loseOverlay; 042 } 043 } 044 else if (!level.Player.IsAlive) 045 { 046 status = diedOverlay; 047 } 048 049 if (status != null) 050 { 051 // Draw status message. 052 Vector2 statusSize = new Vector2(status.Width, status.Height); 053 spriteBatch.Draw(status, center - statusSize / 2, Color.White); 054 } 055 } 056 057 private void DrawShadowedString(SpriteFont font, string value, Vector2 position, Color color) 058 { 059 spriteBatch.DrawString(font, value, position + new Vector2(1.0f, 1.0f), Color.Black); 060 spriteBatch.DrawString(font, value, position, color); 061 } 062 063 private void HandleInput() 064 { 065 touchState = TouchPanel.GetState(); 066 accelerometerState = Accelerometer.GetState(); 067 068 069 bool continuePressed = touchState.AnyTouch(); 070 071 // Perform the appropriate action to advance the game and 072 // to get the player back to playing. 073 if (!wasContinuePressed && continuePressed) 074 { 075 if (!level.Player.IsAlive) 076 { 077 level.StartNewLife(); 078 } 079 else if (level.TimeRemaining == TimeSpan.Zero) 080 { 081 if (level.ReachedExit) 082 LoadNextLevel(); 083 else 084 ReloadCurrentLevel(); 085 } 086 } 087 088 wasContinuePressed = continuePressed; 089 } 090 091 private void LoadNextLevel() 092 { 093 // move to the next level 094 levelIndex = (levelIndex + 1) % numberOfLevels; 095 096 // Unloads the content for the current level before loading the next one. 097 if (level != null) 098 level.Dispose(); 099 100 // Load the level. 101 string levelPath = string.Format("Content/Levels/{0}.txt", levelIndex); 102 using (Stream fileStream = TitleContainer.OpenStream(levelPath)) 103 level = new Level((Application.Current as App).Services, fileStream, levelIndex); 104 } 105 106 private void ReloadCurrentLevel() 107 { 108 --levelIndex; 109 LoadNextLevel(); 110 }
There are a few comments I should make about this block however, most was just the simple fixes that I’ve stated many times before here, other took a little head scratching namely where the Graphics Device was involved.
Because SilverXNA uses a “SharedGraphicsDeviceManager” over the traditional XNA “GraphicsDevice” you need to be a bit more specific in what you are referring to so note that I have replaced the original line for accessing the “Tile Safe Area” from:
To:
So it’s just a point of reference to note the difference but something definitely to keep in mind, there are also TWO rectangle classes, so in instances like this you have to be very specific, cannot rule out one or the other because both might be needed.
I did also remove all of the code for keyboard and gamepad updates as they were no longer needed in the “HandleInput” function.
The other thing of note is the section at the end for loading the next level, I had to change the reference to he Game “Services” repository (part of the Game class again) to the reference held in the App.XAML.CS class, like many things with the reduced / modified framework you just need to know where to look for stuff and if it is held in the Application class for the Silverlight application then you just need to refer to it in this manner thus:
Lastly don’t forget your references, those Streams are not going to find themselves.
Last Man Standing
You know when I said there is only one exception, well I wasn’t being completely honest there, you will still have one broken link if you compile the above for an extension method I had to create to handle the Orientation enumeration differences between XNA and Silverlight (and I did warn you in Part 1
)
Just add the following to the very end of the “GamePage.XAML.cs” file after the GamePage class, it’s a simple enough Extension method that will return an XNA orientation from a defined Silverlight Orientation, this can then be fed to whatever XNA code in your project is handling drawing / input for orientation, as stated before this is to make it easer on multi-platform XNA projects so you don’t need a load of #IF statements to balance everything:
01 public static class OrientationExtensions 02 { 03 public static DisplayOrientation ToXNAOrientation(this PageOrientation input) 04 { 05 switch (input) 06 { 07 case PageOrientation.Landscape: 08 case PageOrientation.LandscapeLeft: 09 return DisplayOrientation.LandscapeLeft; 10 case PageOrientation.LandscapeRight: 11 return DisplayOrientation.LandscapeRight; 12 default: 13 return DisplayOrientation.Portrait; 14 } 15 } 16 }
I could have put this into it’s own class as your supposed to but it was the end of a very long day and it made ore sense to keep all the changes and additional stuff to a minimum.
Mind the water jump at the end of the course
Now you would be forgiven if you thought we were done and sure enough the game will compile and run with only one minor little bugg’et, as shown below:
Now one reason for this is simple, by default XNA will start games in Landscape, in Silverlight the default is Portrait, simples
. So we just need to tell the GamePage that we would like it in Landscape Pretty please (note that every page needs to be set to Landscape or Portrait unlike XNA where it is set once on start-up and only changes if you tell it to or if the user rotates the device if supported)
So for the first time here, edit the GamePage.XAML and change the following line from:
To
Not so bad for your first bit of XAML editing, you can do this from code of course but with Silverlight it’s better to set the default in XAML and only override if needed.
A key thing to note though is that unlike XNA where we only have one screen and have to use game states to manage different setup’s of the game world whereas in Silverlight we can have many pages all of which can be either completely Silverlight, completely XNA or a mixture of both which gives us greater flexibility and avoid complex screen management scenarios.
The best example I have seen of this was the car setup for a racing game, each different view that you had to select tracks, setup the car, select add-on’s and customisations was a separate page each with it’s own logic or input strategy before launching the main racing part of the game, if done in XNA you had to manage many states and write complex routines to handle all the variations or start making compromises to avoid breaking points.
End of the line
That’s a wrap people, print it and go and get more coffee.
As stated before the full source for the completed project can be found here on codeplex
So now we have our game running in SilverXNA, the series will now return to our regular broadcasting schedule and focus on the little things, the big advantages of using SilverXNA and the simplicity it brings for our games.
Light’s out please
Source:
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/xna-silverxna%E2%80%93part-2-getting | CC-MAIN-2018-39 | refinedweb | 3,880 | 54.76 |
First question here on superuser.
I sometimes get ad-hoc bug reports from customers, which I need to transfer to our online bug tracker. Works fine for text, but pictures are tedious.
I'm looking for a solution to copy-paste images from documents (like excel sheets) in a way that if you paste an image to a file input (or text input) on a html page, the file will automatically be written to disk (tmp dir), and the path written to the file input field.
This question is related to Directly paste clipboard image into gmail message, but I would like to ask if there is a solution using a local program only. I'm interested in solutions for all OS's.
Okay guys, This is what I did.
using System;
using System.Collections.Generic;
using System.Text;
using System.Windows.Forms;
using System.Drawing;
namespace ClipSave
{
class Program
{
[STAThread] public static void Main()
{
if (Clipboard.GetDataObject() != null)
{
IDataObject data = Clipboard.GetDataObject();
if (data.GetDataPresent(DataFormats.Bitmap))
{
Image image = (Image)data.GetData(DataFormats.Bitmap, true);
string file = System.Windows.Forms.Application.CommonAppDataPath + "\\ClipSaveImage.png";
image.Save(file, System.Drawing.Imaging.ImageFormat.Png);
Clipboard.SetText(file);
}
else
MessageBox.Show("Copy valid image first");
}
else
MessageBox.Show("Copy image first");
}
}
}
Compiled it to an EXE, added a start-up menu shortcut to it with a hotkey Ctrl+Shift+C. It then copies the current image in clipboard to a file and puts path to the file into the clipboard.
This AutoHotKey thread has an AutoHotKey script for taking a screenshot. It includes the source for a trivial .Net 1.1 program to save the clipboard to a PNG. You would want it to have a few other modifications:
image.png
The AutoHotKey Clipboard commands are another way to access that data, though a program to get its image data would be a bit more complicated.
Not sure if i answering your question. but you can try out clipman. it helps you to copy everything and keep it aside until you select it back one by one.
Clipman,source CNET
These three methods will let you save a file pretty quick. The File Explorer will always go to the same folder so quickly saving or uploading the file won't be a problem either...
The direct clipboard feature has been asked before and failed.
I am not familiar with Windows, but since you asked for solutions for all OSes, I have an applescript solution for Mac OS X that I have tested by copying a picture on this website and executing the script.
This applescript assumes the image is on the clipboard in TIFF format (might have to test to see if this is what comes out of Excel.) It creates the file from the clipboard, saves it to a temporary directory, then pastes the path into a specified field on the frontmost page in Safari.
So, you would copy the image, switch to your safari page, and run the script. (From the script menu, make it into a service and assign a shortcut, or use FastScripts to assign a shortcut to the applescript.)
The script will have to be adjusted to find the proper field on your form.
repeat with i in clipboard info
if TIFF picture is in i then
-- grab the picture from the clipboard, set up a filename based on date
set tp to the clipboard as TIFF picture
set dt to current date
set dtstr to (time of dt as string) & ".tiff"
set pt to ((path to temporary items from user domain as string) & dtstr)
set tf to open for access file pt with write permission
-- save the file
try
write tp to tf
close access tf
on error
close access tf
end try
-- put the path into the proper field in the web Browser
tell application "Safari"
activate
-- adjust javascript as necessary
-- currently inserts into Answer textarea of this superuser.com page for testing
-- ie. make sure you've clicked "add answer" first
set myJS to "document.getElementById('wmd-input').value = '" & pt & "'"
-- document 1 is frontmost
do JavaScript myJS in document 1
end tell
exit repeat
end if
end repeat
Edit: Things to consider:
copy picture
By posting your answer, you agree to the privacy policy and terms of service.
asked
3 years ago
viewed
1545 times
active | http://superuser.com/questions/137116/clipboard-copy-image-get-path-back-when-pasting-in-file-input?answertab=votes | CC-MAIN-2014-10 | refinedweb | 722 | 64.1 |
My level editor saves level layouts in .txt files. During loading screens, it then accesses them from a folder in the project. I really want my game to access them from a scriptable object, however, to make loading easier on my fragile mind.
However, I've managed to find no information on this topic. It's almost like no one has ever asked this question before. I'm not even sure if it can be done.
If it can be done, it would be very nice if someone could help me out.
If it can't, it'd also be nice if someone could point me toward a better solution.
Why do you want to use a scriptable object i dont see the point of it?
The main reason why is because I'm loading level data into an empty scene, actually constructing it at run time. Having a simple way of slipping level info in, without having to worry about file names or the like seems like a good idea. The other reason is because I'm also storing other info in it, such as the location of certain objects in that scene, starting locations and what music is supposed to play. Having all of that data stored in one container also seems like a good idea.
I fear I cant help you with that. Sorry...
Answer by Owen-Reynolds
·
Jan 19, 2020 at 06:24 PM
public TextAsset dessert1Level; will allow you to drag in a reference to a text file. Or public TextAsset[] levelFiles; for a list, in the usual way. I'm not sure if the file still needs to be in Resources.
public TextAsset dessert1Level;
public TextAsset[] levelFiles;
That might be nice for special levels. For "regular" ones, like dungeon3Level4.txt, most people would probably look it up -- getLevel(int dungeonNum, int levelNum) would compute the name of the file.
Answer by sacredgeometry
·
Jan 18, 2020 at 06:26 PM
Just understanding what serialisable objects are and how they are used will answer your question. Try reading the documentation.
Answer by JDelekto
·
Jan 19, 2020 at 05:37 PM
Hi @Meowium99 ,
One way to do this is to create a relatively simple ScriptableObject based class, add a string property to hold the text data (I assume these are not extremely huge files), then add attributes to mark it as serializable as well as a text area (just for ease of use in the editor). The scriptable object would look something like this:
using UnityEngine;
[CreateAssetMenu(menuName = "Level Text File")]
public class LevelTextFile : ScriptableObject
{
[SerializeField, TextArea]
public string text;
}
To create a new "LevelTextFile", go to the folder you want to add it to in your project, right-click to bring up the context menu, use 'Create' and then select "Level Text File". You can create one of these for each of your level text files. Select the object, then paste the contents of your text files into the "text" property for the scriptable object.
Note: while I just stored the object as a plain string, if you still want to keep the text files as assets, but use them from scriptable objects, use "TextAsset" for the "text" property instead of "string". You would then click the dot in the editor for the object and select a text file asset from your project instead. It's all up to you.
Now, you can create an empty GameObject which will be your holder for all (or some) of your scripts in a scene. You can then attach a simple script to the game object, such as:
using UnityEngine;
public class TestScript : MonoBehaviour
{
public LevelTextFile level1;
void Start()
{
if (level1 != null)
{
Debug.Log(level1.textFile);
}
}
}
Of course, you can more properties or an array of these LevelTextFile objects, season to.
Unity recompilation time slowly increases each time it's recompiled. Why?
0
Answers
Make a folder that acts same as streamingassets but can use and edit .asset types.
0
Answers
How come my scriptable object only receives new information when it's selected in the inspector?
0
Answers
Do i need to create a "current value" for those private variables?
0
Answers
Why does my scriptable object asset lose items in a list after first playthrough?
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/1692145/storing-a-txt-file-in-a-scriptable-object.html?sort=oldest | CC-MAIN-2021-39 | refinedweb | 724 | 70.13 |
Java 5 (JDK 1.5) introduced the concept of Generics or parameterized types. In this article, I introduce the concepts of Generics and show you examples of how to use it. In Part II, we will look at how Generics are actually implemented in Java and some issues with the use of Generics.
Java is a strongly typed language. When programming with Java, at compile time, you expect to know if you pass a wrong type of parameter to a method. For instance, if you define:
Dog aDog = aBookReference; // ERROR
where, aBookReference is a reference of type Book, which is not related to Dog, you would get a compilation error.
aBookReference
Book
Dog
Unfortunately though, when Java was introduced, this was not carried through fully into the Collections library. So, for instance, you can write:
Vector vec = new Vector();
vec.add("hello");
vec.add(new Dog());
…
There is no control on what type of object you place into the Vector. Consider the following example:
Vector
package com.agiledeveloper;
import java.util.ArrayList;
import java.util.Iterator;
public class Test
{
public static void main(String[] args)
{
ArrayList list = new ArrayList();
populateNumbers(list);
int total = 0;
Iterator iter = list.iterator();
while(iter.hasNext())
{
total += ((Integer) (iter.next())).intValue();
}
System.out.println(total);
}
private static void populateNumbers(ArrayList list)
{
list.add(new Integer(1));
list.add(new Integer(2));
}
}
In the above program, I create an ArrayList, populate it with some Integer values, and then total the values by extracting the Integers out of the ArrayList.
ArrayList
Integer
The output from the above program is a value of 3, as you would expect.
Now, what if I change the populateNumbers() method as follows:
populateNumbers()
private static void populateNumbers(ArrayList list)
{
list.add(new Integer(1));
list.add(new Integer(2));
list.add("hello");
}
I will not get any compilation errors. However, the program will not execute correctly. We will get the following runtime error:
Exception in thread "main" java.lang.ClassCastException:
java.lang.String at com.agiledeveloper.Test.main(Test.java:17)…
We did not quite have this type-safety with collections pre-Java-5.
Back in the good old days when I used to program in C++, I enjoyed using a cool feature in C++ – templates. Templates give you type-safety while allowing you to write code that is general, that is, it is not specific to any particular type. While C++ templates is a very powerful concept, there are a few disadvantages with it. First, not all compilers support it well. Second, it is fairly complex that it takes quite an effort to get good at using it. Lastly, there are a number of idiosyncrasies in how you can use it that it starts hurting the head when you get fancy with it (this can be said generally about C++, but that is another story). When Java came out, most features in C++ that were complex, like templates and operator overloading, were avoided.
In Java 5, finally, it was decided to introduce Generics. Though Generics – the ability to write general or generic code which is independent of a particular type – is similar to templates in C++ in concept, there are a number of differences. For one, unlike C++ where different classes are generated for each parameterized type, in Java, there is only one class for each generic type, irrespective of how many different types you instantiated it with. There are, of course, certain problems as well in Java Generics, but that we will talk about in Part II. In this part I, we will focus on the good things.
The work of Generics in Java originated from a project called GJ1 (Generic Java) which started out as a language extension. This idea then went through the Java Community Process (JCP) as a Java Specification Request (JSR) 142.
Let’s start with a non-generic example we looked at to see how we can benefit from Generics. Let’s convert the code above to use Generics. The modified code is shown below:
package com.agiledeveloper;
import java.util.ArrayList;
import java.util.Iterator;
public class Test
{
public static void main(String[] args)
{
ArrayList<Integer> list = new ArrayList<Integer>();
populateNumbers(list);
int total = 0;
for(Integer val : list)
{
total = total + val;
}
System.out.println(total);
}
private static void populateNumbers(ArrayList<Integer> list)
{
list.add(new Integer(1));
list.add(new Integer(2));
list.add("hello");
}
}
I am using ArrayList<Integer> instead of the ArrayList. Now, if I compile the code, I get a compilation error:
ArrayList<Integer>
Test.java:26: cannot find symbol
symbol : method add(java.lang.String)
location: class java.util.ArrayList<java.lang.Integer>
list.add("hello");
^
1 error
The parameterized type of ArrayList provides the type-safety. “Making Java easier to type and easier to type,” was the slogan of the Generics contributors in Java.
In order to avoid confusion between the generic parameters and the real types in your code, you must follow a good naming convention. If you are following good Java conventions and software development practices, you would probably not be naming your classes with single letters. You would also be using mixed case with class names, starting with upper case. Here are some conventions to use for Generics:
public class PriorityQueue<E> {…}
The syntax for writing a generic class is pretty simple. Here is an example of a generic class:
package com.agiledeveloper;
public class Pair<E>
{
private E obj1;
private E obj2;
public Pair(E element1, E element2)
{
obj1 = element1;
obj2 = element2;
}
public E getFirstObject() { return obj1; }
public E getSecondObject() { return obj2; }
}
This class represents a pair of values of some generic type E. Let’s look at some examples of usage of this class:
E
// Good usage
Pair<Double> aPair = new Pair<Double>(new Double(1), new Double(2.2));
If we try to create an object with types that mismatch, we will get a compilation error. For instance, consider the following example:
// Wrong usage
Pair<Double> anotherPair = new Pair<Double>(new Integer(1), new Double(2.2));
Here, I am trying to send an instance of Integer and an instance of Double to an instance of Pair. However, this will result in a compilation error.
Double
Pair
Generics honors the Liskov’s Substitutability Principle4. Let me explain that with an example. Say, I have a Basket of Fruits. To it I can add Oranges, Bananas, Grapes, etc. Now, let’s create a Basket of Banana. To this, I should only be able to add Bananas. It should disallow adding other types of fruits. Banana is a Fruit, i.e., Banana inherits from Fruit. Should Basket of Banana inherit from Basket for Fruits, as shown in the figure below?
If Basket of Banana were to inherit from Basket of Fruit, then you may get a reference of type Basket of Fruit to refer to an instance of Basket of Banana. Then, using this reference, you may add a Banana to the basket, but you may also add an Orange. While adding a Banana to a Basket of Banana is OK, adding an Orange is not. At best, this will result in a runtime exception. However, the code that uses Basket of Fruits may not know how to handle this. The Basket of Banana is not substitutable where a Basket of Fruits is used.
Generics honors this principle. Let’s look at this example:
Pair<Object> objectPair = new Pair<Integer>(new Integer(1), new Integer(2));
This code will produce a compile time error:
Error: line (9) incompatible types found :
com.agiledeveloper.Pair<java.lang.Integer> required:
com.agiledeveloper.Pair<java.lang.Object>
Now, what if you want to treat different type of Pairs commonly as one type? We will look at this later in the Wildcard section.
Before we leave this topic, let’s look at one weird behavior though. While:
is not allowed, the following is allowed, however:
Pair objectPair = new Pair<Integer>(new Integer(1), new Integer(2));
The Pair without any parameterized type is the non-generic form of the Pair class. Each generic class also has a non-generic form so it can be accessed from a non-generic code. This allows for backward compatibility with existing code or code that has not been ported to use Generics. While this compatibility has a certain advantage, this feature can lead to some confusion and also type-safety issues.
In addition to classes, methods may also be parameterized.
Consider the following example:
public static <T> void filter(Collection<T> in, Collection<T> out)
{
boolean flag = true;
for(T obj : in)
{
if(flag)
{
out.add(obj);
}
flag = !flag;
}
}
The filter() method copies alternate elements from the in collection to the out collection. The <T> in front of the void indicates that the method is a generic method with <T> being the parameterized type. Let’s look at a usage of this generic method:
filter()
in
out
<T>
void
ArrayList<Integer> lst1 = new ArrayList<Integer>();
lst1.add(1);
lst1.add(2);
lst1.add(3);
ArrayList<Integer> lst2 = new ArrayList<Integer>();
filter(lst1, lst2);
System.out.println(lst2.size());
We populate an ArrayList lst1 with three values and then filter (copy) its contents into another ArrayList lst2. The size of lst2 after the call to the filter() method is 2.
lst1
lst2
Now, let’s look at a slightly different call:
ArrayList<Double> dblLst = new ArrayList<Double>();
filter(lst1, dblLst);
Here I get a compilation error:
Error:
line (34) <T>filter(java.util.Collection<T>,java.util.Collection<T>)
in com.agiledeveloper.Test cannot be applied to
(java.util.ArrayList<java.lang.Integer>,
java.util.ArrayList<java.lang.Double>)
The error says that it can’t send an ArrayList of different types to this method. This is good. However, let’s try the following:
ArrayList<Integer> lst3 = new ArrayList<Integer>();
ArrayList lst = new ArrayList();
lst.add("hello");
filter(lst, lst3);
System.out.println(lst3.size());
Like it or not, this code compiles with no error, and the call to lst3.size() returns a 1. First, why did this compile and what’s going on here? The compiler bends over its back to accommodate calls to generic methods, if possible. In this case, by treating lst3 as a simple ArrayList, without any parameterized type that is (refer to the last paragraph in the “Generics and Substitutability” section above), it is able to call the filter method.
lst3.size()
lst3
filter
Now, this can lead to some problems. Let’s add another statement to the example above. As I start typing, the IDE (I am using IntelliJ IDEA) is helping me with code prompt as shown below:
It says that the call to the get() method takes an index and returns an Integer. Here is the completed code:
get()
ArrayList<Integer> lst3 = new ArrayList<Integer>();
ArrayList lst = new ArrayList();
lst.add("hello");
filter(lst, lst3);
System.out.println(lst3.size());
System.out.println(lst3.get(0));
So, what do you think should happen when you run this code? May be runtime exception? Well, surprise! We get the following output for this code segment:
1
hello
Why is that? The answer is in what actually gets compiled (we will discuss more about this in Part II of this article). The short answer for now is, even though code completion suggested that an Integer is being returned, in reality, the return type is Object. So, the string "hello" managed to get through without any error.
Object
Now, what happens if we add the following code:
for(Integer val: lst3)
{
System.out.println(val);
}
Here, clearly, I am asking for an Integer from the collection. This code will raise a ClassCastException. While Generics is supposed to make our code type-safe, this example shows how we can easily, with intent or by mistake, bypass that, and at best, end up with runtime exception, or at worst, have the code silently misbehave. Enough of those issues for now. We will look at some of these gotchas further in Part II. Let’s progress further on what works well for now, in this Part I.
ClassCastException
Let’s say we want to write a simple generic method to determine the maximum of two parameters. The method prototype would look like this:
public static <T> T max(T obj1, T obj2)
I would use it as shown below:
System.out.println(max(new Integer(1), new Integer(2)));
Now, the question is how do I complete the implementation of the max() method? Let’s take a stab at this:
max()
public static <T> T max(T obj1, T obj2)
{
if (obj1 > obj2) // ERROR
{
return obj1;
}
return obj2;
}
This will not work. The > operator is not defined on references. Hmm, how can I then compare the two objects? The Comparable interface comes to mind. So, why not use the comparable interface to get our work done:
>
Comparable
public static <T> T max(T obj1, T obj2)
{
// Not elegant code
Comparable c1 = (Comparable) obj1;
Comparable c2 = (Comparable) obj2;
if (c1.compareTo(c2) > 0)
{
return obj1;
}
return obj2;
}
While this code may work, there are two problems. First, it is ugly. Second, we have to consider the case where the cast to Comparable fails. Since we are so heavily dependent on the type implementing this interface, why not ask the compiler to enforce this? That is exactly what upper bounds does for us. Here is the code:
public static <T extends Comparable> T max(T obj1, T obj2)
{
if (obj1.compareTo(obj2) > 0)
{
return obj1;
}
return obj2;
}
The compiler will check to make sure that the parameterized type given when calling this method implements the Comparable interface. If you try to call max() with instances of some type that does not implement the Comparable interface, you will get a stern compilation error.
We are progressing well so far, and you are probably eager to dive into a few more interesting concepts with Generics. Let’s consider this example:
public abstract class Animal
{
public void playWith(Collection<Animal> playGroup)
{
}
}
public class Dog extends Animal
{
public void playWith(Collection<Animal> playGroup)
{
}
}
The Animal class has a playWith() method that accepts a Collection of Animals. The Dog, which extends Animal, overrides this method. Let’s try to use the Dog class in an example:
Animal
playWith()
Collection
Collection<Dog> dogs = new ArrayList<Dog>();
Dog aDog = new Dog();
aDog.playWith(dogs); //ERROR
Here, I create an instance of Dog and send a Collection of Dogs to its playWith() method. We get a compilation error:
Error: line (29) cannot find symbol
method playWith(java.util.Collection<com.agiledeveloper.Dog>)
This is because a Collection of Dogs can’t be treated as a Collection of Animals which the playWith() method expects (see the section “Generics and Substitutability” above). However, it would make sense to be able to send a Collection of Dogs to this method, isn’t it? How can we do that? This is where the wildcard or unknown type comes in.
We modify both the playMethod() methods (in Animal and Dog) as follows:
playMethod()
public void playWith(Collection<?> playGroup)
The Collection is not of type Animal. Instead it is of unknown type (?). Unknown type is not Object, it is just unknown or unspecified.
Now, the code:
aDog.playWith(dogs);
compiles with no error.
There is a problem however. We can also write:
ArrayList<Integer> numbers = new ArrayList<Integer>();
aDog.playWith(numbers);
The change I made to allow a Collection of Dogs to be sent to the playWith() method now permits a Collection of Integers to be sent as well. If we allow that, that will become one weird dog. How can we say that the compiler should allow Collections of Animals or Collections of any type that extends Animal, but not any Collection of other types? This is made possible by the use of upper bounds as shown below:
public void playWith(Collection<? extends Animal> playGroup)
One restriction of using wildcards is that you are allowed to get elements from a Collection<?>, but you can’t add elements to such a collection – the compiler has no idea what type it is dealing with.
Collection<?>
Let’s consider one final example. Assume we want to copy elements from one collection to another. Here is my first attempt for a code to do that:
public static <T> void copy(Collection<T> from, Collection<T> to) {…}
Let’s try using this method:
ArrayList<Dog> dogList1 = new ArrayList<Dog>();
ArrayList<Dog> dogList2 = new ArrayList<Dog>();
//…
copy(dogList1, dogList2);
In this code, we are copying Dogs from one Dog ArrayList to another.
Dog ArrayList
Since Dogs are Animals, a Dog may be in both a Dog’s ArrayList and an Animal’s ArrayList, isn’t it? So, here is the code to copy from a Dog’s ArrayList to an Animal’s ArrayList.
ArrayList<Animal> animalList = new ArrayList<Animal>();
copy(dogList1, animalList);
This code, however, fails compilation with error:
Error:
line (36) <T>copy(java.util.Collection<T>,java.util.Collection<T>)
in com.agiledeveloper.Test cannot be applied
to (java.util.ArrayList<com.agiledeveloper.Dog>,
java.util.ArrayList<com.agiledeveloper.Animal>)
How can we make this work? This is where the lower bounds come in. Our intent for the second argument of Copy is for it to be of either type T or any type that is a base type of T. Here is the code:
Copy
T
public static <T> void copy(Collection<T> from, Collection<? super T> to)
Here we are saying that the type accepted by the second collection is the same type as T is, or its super type.
I have shown, using examples, the power of the Generics in Java. There are issues with using Generics in Java, however. I will defer discussions on this to the Part II of this article. In Part II, we will discuss some restrictions of Generics, how Generics are implemented in Java, the effect of type erasure, changes to the Java class library to accommodate Generics, issues with converting a non-Generics code to Generics code, and finally, some of the pitfalls or drawbacks of Generics.
In this Part I we discussed about Generics in Java and how we can use it. Generics provide type-safety. Generics are implemented in such a way that it provides backward compatibility with non-generic code. These are simpler than templates in C++ and also there is no code bloat when you compile. In Part II we discuss the issues with using Gener. | http://www.codeproject.com/Articles/27611/Generics-in-Java-Part-I?fid=1475154&df=7&mpp=25&sort=Position&spc=Relaxed&tid=4473980 | CC-MAIN-2016-40 | refinedweb | 3,077 | 56.86 |
Introduction
In the Part 1 article in this series we provided a brief overview of the IBM InfoSphere DataStage product in the IBM Information Server suite of products and explained the role of the Oracle Connector stage in DataStage jobs. We explained the difference between conductor and player processes for the stage. We introduced the concepts of parallelism and partitioning and explained how their meaning differs between the DataStage environment and the Oracle database environment. We covered in detail the performance tuning and troubleshooting of the Oracle Connector stage configured to perform SQL statements on the database, including query statements used to fetch and lookup rows in the database as well as DML and PL/SQL statements used to insert, update and delete rows in the database.
In the Part 2 article of the series we will continue our coverage of the Oracle Connector performance tuning. First we will focus on performance aspects of the bulk load mode in the connector. In this mode the connector is used to load data to the database by utilizing the Oracle direct path interface. We will cover the operation of loading the records to the database as well as the operations that often need to be performed on the table indexes and constraints before and after loading the records. We will then present a few guidelines regarding the use of reject links with the connector. We will show how reject links can affect the connector performance in some cases and present some alternative approaches for your consideration. We will end the article by presenting you with a number of performance tuning strategies oriented towards the handling of different Oracle data types in the connector.
As was the case with the Part 1 article, the Part 2 article assumes the use of DataStage parallel jobs and Information Server Version 9.1, although some of the presented concepts would apply also to DataStage server jobs as well as the earlier Information Server versions.
Bulk load
When the Oracle Connector stage is configured to run in bulk load mode it utilizes the Oracle direct path interface to write data to the target database table. It receives records from the input link and passes them to Oracle database which formats them into blocks and appends the blocks to the target table as opposed to storing them in the available free space in the existing blocks. To configure the stage to run in bulk load mode the Write mode property of the stage needs to be set to value Bulk load. In this section, we will look at the connector properties and environment variables that play important role for tuning bulk load operation performance in the stage, as shown in Figure 1.
Figure 1. Connector properties that affect bulk load performance
Running the stage in Bulk load write mode typically results in better performance than when running it in Insert write mode. However, Bulk load write mode imposes several restrictions that are not present when the Insert write mode is used, especially when running the stage in parallel execution mode with Allow concurrent load sessions connector property set to value Yes. These restrictions are primarily related to the handling of triggers, indexes and constraints defined on the target table.
The information in the remainder of this section is intended to help you answer the following questions:
- Should I configure the stage to run in Bulk load or Insert write mode?
- If I choose to run the stage in Bulk load write mode, how should I configure the remaining settings in the stage in order to achieve optimal performance?
Data volume considerations
When deciding to configure the stage in Bulk load or Insert write mode, the critical aspect to consider is the data volume, in other words how many records on average is the stage writing to the target table in a single job invocation.
If the data volume is relatively small, the preferred option would be to use Insert write mode due to more flexibility that this write mode provides in respect to maintaining indexes, constraints and triggers of the target table. With the Insert write mode all the table indexes, constraints and triggers are enforced during the job execution and remain valid after the job completes. For example, if the stage that is running in Insert write mode tries to insert a record to the target table that violates the primary key constraint in the table, the stage can be configured to handle this scenario and to continue processing the remaining records in a number of ways that are not readily available when the stage is running in Bulk load write mode.
- It can have a reject link defined and configured to process records that have violated constraints. Each input record that violates table constraints will be routed to the reject link, with optionally included error code and message text explaining why the record was rejected. The existing row in the table for which the primary key was violated will remain in the table.
- It can be configured to run in Insert then update write mode in which case the existing row in the target table will be updated with the values from the record that violated the constraint.
- It can be configured to run in Insert new rows only write mode, in which case the record that violated the constraint will be skipped (ignored).
- It can be configured with an INSERT statement that contains the LOG ERRORS INTO clause, which will result in Oracle redirecting records that violated the constraint, along with the error information, to the specified error logging table.
If the data volume is relatively large and the primary goal is to append the data to the target table as quickly as possible then the Bulk load mode should be considered.
The question is what should be considered a small data volume as opposed to a large one. The answer depends on factors such as how much performance you are willing to sacrifice when you choose the Insert write mode, which in turn depends on the size of the batch window available to push the data to the database and on the available system resources. As a rule of thumb, thousands of records would likely be considered a small data volume, while millions of records would likely be considered a large one.
If the table does not have any indexes, constraints and triggers defined then Bulk load write mode would make a good candidate irrespective of the data volume involved because many of the restrictions imposed by this write mode would not apply any more. This includes the scenario where the table does in fact have indexes, constraints and triggers defined but you have procedures in place to disable or drop them prior to loading the data and then enable or rebuild them after the load is complete.
A simple test like the following can help you decide which write mode to choose:
- Design a test job with Oracle Connector stage that has an input link. In the test job provide records similar in volume and structure to the records that will be processed by the actual job in production.
- Ensure that the target database table used in the test job has the same definition as the table that will be used in production. If the production job performs incremental data loads to an existing table in the database, ensure that the table used for the test job is initially populated with a similar number of rows as the table that will be used in production.
- Run the job a few times in insert mode and capture the average time it took the job to complete.
- Manually disable any indexes, constraints and triggers on the target table and repeat the job a few times with the connector stage configured in bulk load write mode and capture the average time it took the job to complete.
- Compare the results from the previous two sets of job runs. If the numbers are similar that would be an indication that the Insert write mode is sufficient. If bulk load completes much faster, determine which additional operations need to be performed on the table to restore the constraints and indexes. Issue these operations from a tool like SQL*Plus to see how much time they take to complete. If that time combined with the time to bulk load the data is still considerably shorter than the time that it took to insert the data, then the bulk load mode is likely to be a better choice.
Handling triggers
If the target table has triggers that need to be fired for every row written to the table then the bulk load mode should not be used because database triggers are not supported in this mode. To bulk load data to a table that has triggers, the triggers need to be disabled prior to the load. The connector provides an option to disable the triggers automatically before the load and enable them after the load. Once the triggers are enabled they will fire only for the rows inserted to the table after that point. They will not fire for the rows that were loaded while the triggers were disabled.
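If you prefer to issue the statements yourself, for example from the Before SQL statement and After SQL statement properties described later in this article, the operations involved are plain ALTER TRIGGER commands. The trigger name below is a hypothetical example:

-- Specified in the Before SQL statement property:
ALTER TRIGGER TRG_TABLE_ITEMS_AUDIT DISABLE;
-- Specified in the After SQL statement property:
ALTER TRIGGER TRG_TABLE_ITEMS_AUDIT ENABLE;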
Handling indexes
If the table has indexes, you can configure the stage to disable all the indexes when performing the load. To do this set the Index maintenance connector property to value Skip all. This is typically the best option to choose as it allows you to run the stage without restrictions in parallel execution mode and with any Partition type value selected on the Partitioning page for the input link. However, all indexes are marked UNUSABLE after the load completes and should be rebuilt. This is where the data volume and the number of indexes play a critical role. If you are incrementally loading a relatively small number of records compared to the number of records already present in the table, it may be significantly faster to run the stage in Insert write mode and have the database maintain indexes throughout the insert operation instead of having to rebuild all the indexes after bulk load. On the other hand, if the target table is empty and you need to load many records to it then you will likely benefit from the Bulk load write mode: the indexes will need to be built on all the rows anyway, and in that case it may be faster to first load all the records without maintaining the indexes and then build the indexes from scratch.
Instead of rebuilding the indexes in an external tool after the load, you can configure the stage to request that the Oracle database maintain the indexes during the load by setting the Index maintenance option property to Do not skip unusable or Skip unusable. The use of these two options is highly restrictive though and in most cases will not be the most optimal choice. Here are the things that you need to keep in mind if you consider using one of these two options:
- You will need to set Allow concurrent load sessions connector property to No, otherwise you will receive Oracle error "ORA-26002: Table string has index defined upon it" before any data is loaded. This requirement further implies that you will not be able to run the stage in parallel execution mode, except for one special case when loading to a partitioned table which is covered later in the article in the section that talks about loading records to a partitioned table.
- If the table has a UNIQUE index, the index will remain VALID after the job completes, but only if none of the loaded rows have violated the uniqueness imposed by the index. Otherwise the index will automatically be marked UNUSABLE even though the job completes successfully. Subsequent attempts to run the job will result in failure, and the reported Oracle error depends on the Index maintenance option property value. For the Do not skip unusable value the error will be "ORA-26028: index string.string initially in unusable state" and for the Skip unusable value the error will be "ORA-26026: unique index string.string initially in unusable state".
- If the table has a NON-UNIQUE index, the index will remain VALID after the load. If the index was marked UNUSABLE before the load, the behavior will depend on the Index maintenance option property value. For the Do not skip unusable value it will result in Oracle error "ORA-26002: Table string has index defined upon it" and for the Skip unusable value the job will complete and the index will remain in UNUSABLE state.
You can configure the stage to automatically rebuild indexes after the load, and you can control the LOGGING and PARALLEL clauses of the index rebuild statements to optimize the index rebuild process. You can configure the stage to fail the job if any index rebuild statement fails or to only issue warning messages. To improve the performance the connector stage will rebuild the indexes only when necessary. It will inspect the state of each index before rebuilding it. If the index is marked VALID the connector will skip rebuilding it. This applies also to locally and globally partitioned indexes. In that case, the connector will inspect the index partitions and subpartitions (if the index is locally partitioned and the table is composite-partitioned) and rebuild only those index partitions and subpartitions that are marked UNUSABLE.
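If you rebuild the indexes outside of the stage, the statements are equivalent to the ALTER INDEX statements the connector generates. The following is a sketch for the NAME-column index of the TABLE_ITEMS example table introduced later in this article; the degree of parallelism shown is only illustrative:

ALTER INDEX IDX_TABLE_ITEMS_NAME REBUILD PARALLEL 4 NOLOGGING;
-- Optionally restore the serial, logged attributes after the rebuild:
ALTER INDEX IDX_TABLE_ITEMS_NAME NOPARALLEL;
ALTER INDEX IDX_TABLE_ITEMS_NAME LOGGING;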
Handling constraints
The presence of integrity and referential constraints on the table has implications on the bulk load operation similar to those imposed by the indexes.
NOT NULL and CHECK constraints do not need to be disabled prior to the load. If any input records violate the constraint the job will fail and no records will be loaded to the table. The exception to this is the case when the stage is configured to run in parallel execution mode and some player processes finish loading their record partitions before the player process that encountered the record in error. In that case, the records loaded by the player processes that have already finished will be committed to the table. From the performance point of view, a good approach for dealing with these two types of constraints is to leave them enabled on the table during the load but ensure that no records violate them by performing the necessary checking and error handling upstream of the connector stage. We provide some concrete suggestions later in the section that addresses reject links.
FOREIGN KEY, UNIQUE and PRIMARY KEY constraints also do not need to be disabled prior to the load but are effectively ignored by the load operation. Any records that violate these constraints will still be loaded and the constraints will remain VALIDATED and ENABLED. To ensure that these constraints are valid for the rows loaded by the connector stage you will need to disable them prior to the load and enable them after the load. Keep in mind that these types of constraints are usually accompanied by indexes. The database enforces UNIQUE and PRIMARY KEY constraints through unique indexes, and FOREIGN KEY constraints often have user-defined indexes created for them. In the earlier section about handling indexes we explained the effects that indexes have on the load operation. The same effects apply in this case and need to be taken into consideration.
The connector provides an option to disable constraints before the load. You can specify this by setting the Disable constraints property to Yes. When this is done, the load operation typically completes faster than the insert operation. However, you have to enable and validate the constraints following the load and deal with the loaded records that have violated the constraints. That part can have a major impact on the overall performance. If you have a mechanism in place outside of the DataStage job to enable and validate the constraints after the load, then having the connector disable them before the load will provide for the best performance of the job itself since there will be no restrictions during the load due to constraints and no time will be spent at the end of the job to enable the constraints.
You can configure the stage to enable and validate the constraints after the load by setting the Enable constraints property to Yes. But, if any constraint fails the validation at that time, the job will fail and the constraint will remain in DISABLED and NOT VALIDATED state. You can specify the exceptions table in the Exceptions table name property, to which the ROWIDs of the rows that violated the constraints will be stored, along with the name of the constraint that they have violated. Doing this will still result in the job failure if any of the constraints cannot be enabled, but you will be able to determine which rows violated which constraint. You can further configure the stage to delete the rows from the table that violated the constraints and to try to enable the constraints again, but that will discard all the rows in the table found to be in violation of the constraints. To store them somewhere instead of discarding them you can define a reject link for the stage and enable the SQL error – constraint violation condition on it. This will have a major impact on the performance because it enforces sequential execution of the stage in a single player process. Rather than accepting this restriction you should consider running the stage in parallel execution mode and specifying the Insert value for the Write mode as that will potentially result in better overall performance. In general, it is best to avoid the use of reject links if possible. The section about reject links provides some concrete ideas in this regard.
If you configure the stage to disable table constraints prior to the load, enable them after the load, and reject the records that have violated constraints, all of the records in violation of UNIQUE and PRIMARY KEY constraints defined on the table will be rejected. The connector doesn't make a distinction between the rows that existed in the table prior to the load and those that were loaded by the connector. For example, if a row loaded by the connector violated the primary key constraint because another row with the same primary key value existed in the table prior to the load, both rows will be deleted from the table and sent to the reject link. To have the stage delete and reject only those rows in violation of a constraint that were not in the table prior to the load, you will need to implement a custom solution such as the one shown in the next section.
Custom handling of indexes and constraints
When the connector stage is configured to handle triggers, indexes or constraints automatically before and after the load, the connector will accomplish this by issuing various Data Definition Language (DDL) statements on the database. For enabling and disabling triggers, it will use ALTER TRIGGER statements, for rebuilding indexes it will use ALTER INDEX statements and for enabling and disabling constraints it will use ALTER TABLE statements. You can locate these statements in the job log when you run the job with the CC_MSG_LEVEL environment variable set to value 2. The connector provides properties that you can use to include or remove particular clauses in these statements. For example, it provides properties that allow you to control the presence of PARALLEL and LOGGING clauses in the ALTER INDEX … REBUILD statement.
In some cases, you may wish to issue your own statements for handling triggers, indexes and constraints instead of having the connector automatically generate and run them. For example, you may wish to reference table triggers, indexes and constraints directly by their names instead of having the connector query the database dictionary views to dynamically discover those objects. You may wish to manually include certain clauses in these statements that you cannot configure through the connector properties. You can achieve this outside of the stage by running external tools and scripts before and after running the job. You can do it through the stage by utilizing the Before SQL statement and After SQL statement connector properties. In each of these two properties you can specify either a list of semicolon-separated SQL statements or an anonymous PL/SQL block to be executed. The connector stage executes these statements in the conductor process. It will run the Before SQL statement statements before any player process has been created for the stage to process the records. It will run the After SQL statement statements after all the player processes have finished loading the records. The following example illustrates how this can be accomplished.
Suppose that you want to load records to the table TABLE_ITEMS which stores information about a certain type of items. The table has the ID column that represents item identifiers and serves as the primary key for the table, which further means that the table has a unique index defined on this same column. The table also has the NAME column that stores item names and for which a non-unique index is defined on the table. Finally, the table has the LOAD_DATE column which represents the date and time when the item row was loaded. The statements shown in Listing 1 can be used to create this table.
Listing 1. Create table statement example
CREATE TABLE TABLE_ITEMS(ID NUMBER(10), NAME VARCHAR2(20), LOAD_DATE DATE);
ALTER TABLE TABLE_ITEMS ADD CONSTRAINT PK_TABLE_ITEMS_ID PRIMARY KEY (ID);
CREATE INDEX IDX_TABLE_ITEMS_NAME ON TABLE_ITEMS(NAME);
The load date is set when the job runs and is passed to the job as a job parameter in the format YYYY-MM-DD HH24:MI:SS. Various mechanisms can be used to set and pass this value to the job. For example, if you are invoking the job with the dsjob command from the command line, the system date command can be used to produce this value. For example, on Linux you can use the command shown in Listing 2 to run the job and pass the LOAD_DATE job parameter value based on the current system date and time.
Listing 2. dsjob command invocation
dsjob -run -jobstatus -userstatus -param LOAD_DATE="`date +'%F %T'`" project_name job_name
Another option for passing the current date and time would be to create a sequence job that contains a single activity stage which in turn invokes the load job and initializes the LOAD_DATE job parameter to the value DSJobStartTimestamp, which is a built-in DataStage macro that returns the job start timestamp.
Suppose the table contains many rows already and that you are running your job to load additional rows. Further, most of the records that you are loading are for new items, but some records represent existing items, meaning they have the ID field value for which there is already a row in the table with that same ID value. You want to load the data as fast as possible, and you also want the constraints and indexes to be enabled and valid for all the table rows after the load. If any loaded rows violate the primary key constraint, you want to store them to an alternate table TABLE_ITEMS_DUPLICATES which has the same columns as the TABLE_ITEMS table but does not have any constraints and indexes defined on it, as shown in Listing 3.
Listing 3. Create table statement for duplicates
CREATE TABLE TABLE_ITEMS_DUPLICATES(ID NUMBER(10), NAME VARCHAR2(20), LOAD_DATE DATE);
You could achieve this task with the stage configured to disable constraints before the load, skip all indexes during the load, and rebuild indexes and enable constraints after the load. You could define a reject link on the stage, select the SQL error – constraint violations condition on the reject link and direct this link to another Oracle Connector stage configured to insert rows to the TABLE_ITEMS_DUPLICATES table. But this approach would have the following limitations:
- Because you configured the stage to disable the constraints and reject rows that violated constraints after enabling them, the connector enforces sequential execution in a single player process. The rejecting of records on the reject link can only take place in a player process and in this case the rejecting is done when the connector enables the constraints after the load. For the player process to enable the constraints after it has finished loading the records, it needs to be certain that it is the only player process for the stage. But in practice, you may wish to run the stage in parallel execution mode so that you can utilize the available system resources on the engine tier.
- The connector rejects all the rows that violated the primary key constraint, including the rows that existed in the table prior to the load but that happen to have the same ID value as some of the rows that were loaded. This is because at the time the constraints are enabled all the rows are already in the table storage and no distinction is made at that time between the rows that were in the table prior to the load and those that were just loaded. In practice, you may wish to reject only the rows that were just loaded by the job and that violated the constraint, but not any of the rows that existed in the table prior to running the job.
The following is one possible approach to avoid these limitations. Specify the statement shown in Listing 4 in the Before SQL statement property.
Listing 4. Statement to disable PK constraint
ALTER TABLE TABLE_ITEMS DISABLE CONSTRAINT PK_TABLE_ITEMS_ID;
This statement will explicitly disable the constraint PK_TABLE_ITEMS_ID on the table prior to loading records to the table.
Set the Index maintenance option connector property to Skip all. Set the Allow concurrent load sessions connector property to value Yes. Do not define a reject link for the stage. Configure the stage to run in parallel execution mode.
Define the exceptions table TABLE_ITEMS_EXCEPTIONS. This table will be needed for the EXCEPTIONS INTO option of the ENABLE CONSTRAINT clause in the ALTER TABLE statement that we will want to invoke (explained further down). The format that the exceptions table needs to follow can be found in the SQL script UTLEXCPT.SQL included with the Oracle database product installation. Based on the information in this script, to create the TABLE_ITEMS_EXCEPTIONS exceptions table for this example execute the statement shown in Listing 5.
Listing 5. Statement to create the exceptions table
CREATE TABLE TABLE_ITEMS_EXCEPTIONS(ROW_ID ROWID, OWNER VARCHAR2(30), TABLE_NAME VARCHAR2(30), CONSTRAINT VARCHAR2(30));
Specify the PL/SQL anonymous block shown in Listing 6 in the After SQL statement property. The connector will submit it to the database once all the records have been loaded. See the comments embedded in the PL/SQL code for details about individual operations performed by this PL/SQL block.
Listing 6. Custom PL/SQL code for handling indexes and constraints
DECLARE
   -- Define the exception for handling constraint validation error ORA-02437.
   cannot_validate_constraint EXCEPTION;
   PRAGMA EXCEPTION_INIT(cannot_validate_constraint, -2437);
BEGIN
   -- Truncate the tables TABLE_ITEMS_DUPLICATES and TABLE_ITEMS_EXCEPTIONS in case
   -- they contain any rows from the previous job runs.
   EXECUTE IMMEDIATE 'TRUNCATE TABLE TABLE_ITEMS_DUPLICATES';
   EXECUTE IMMEDIATE 'TRUNCATE TABLE TABLE_ITEMS_EXCEPTIONS';
   -- Try to enable the PK_TABLE_ITEMS_ID constraint and to build the underlying
   -- unique index. If any rows are in violation of the constraint then store
   -- their ROWIDs to the TABLE_ITEMS_EXCEPTIONS table.
   EXECUTE IMMEDIATE 'ALTER TABLE TABLE_ITEMS ENABLE CONSTRAINT PK_TABLE_ITEMS_ID
                      EXCEPTIONS INTO TABLE_ITEMS_EXCEPTIONS';
EXCEPTION
   WHEN cannot_validate_constraint THEN
      -- The constraint could not be enabled. The constraint violations need to be
      -- handled before trying again to enable the constraint.
      BEGIN
         -- Set the default date format for the session to match the format of the
         -- LOAD_DATE job parameter value.
         EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_DATE_FORMAT=''YYYY-MM-DD HH24:MI:SS''';
         -- Copy rows from TABLE_ITEMS table to TABLE_ITEMS_DUPLICATES table that
         -- violated the constraint and were loaded by this job. The rows that were
         -- loaded by this job will have the LOAD_DATE column value that matches the
         -- LOAD_DATE job parameter value.
         INSERT INTO TABLE_ITEMS_DUPLICATES
            SELECT ID, NAME, LOAD_DATE FROM TABLE_ITEMS
            WHERE ROWID IN (SELECT ROW_ID FROM TABLE_ITEMS_EXCEPTIONS)
              AND LOAD_DATE='#LOAD_DATE#';
         -- Delete the rows from TABLE_ITEMS that were copied to TABLE_ITEMS_DUPLICATES.
         DELETE FROM TABLE_ITEMS
            WHERE ROWID IN (SELECT ROW_ID FROM TABLE_ITEMS_EXCEPTIONS)
              AND LOAD_DATE='#LOAD_DATE#';
         -- Try to enable the constraint again. This time the operation should be
         -- successful.
         EXECUTE IMMEDIATE 'ALTER TABLE TABLE_ITEMS ENABLE CONSTRAINT PK_TABLE_ITEMS_ID';
         -- Rebuild the non-unique index IDX_TABLE_ITEMS_NAME that was disabled
         -- during the load.
         EXECUTE IMMEDIATE 'ALTER INDEX IDX_TABLE_ITEMS_NAME REBUILD PARALLEL NOLOGGING';
      END;
END;
Utilizing Oracle date cache
You can configure the stage to take advantage of the Oracle date cache feature by setting the Use Oracle date cache property to value Yes. This feature is only available in bulk load mode and can significantly improve the load performance when the input link has one or more Date or Timestamp columns, those columns are used to load values to TIMESTAMP table columns, and there are many repeating values for those columns.
The connector prepares values for TIMESTAMP table columns in a format that requires them to be converted prior to being stored in the table. When the date cache is utilized the result of the conversion is cached, so when the same value appears again on the input the cached result is used instead of performing the conversion again.
You can specify the number of entries in the date cache through the Cache size property. To avoid cache misses when looking up the values in the cache, set this property to a value no smaller than the total number of distinct values expected to be encountered on the input per each Date and Timestamp link column.
You can also choose to disable the date cache if it becomes full during load by setting Disable cache when full property to Yes. One scenario where this can be useful is when you run the job repeatedly to load records to the target table. Most of the times the total number of distinct input values is smaller than the selected cache size but occasionally most of the input values are unique. In that case, disabling the cache when it becomes full will avoid performing lookups on the full date cache, which will improve the performance because each of the those lookups would result in a cache miss.
Note that each player process for the stage will use its own date cache when the stage is configured to run in parallel execution mode.
In Information Server Version 9.1, when Date and Timestamp input link columns are used to load values to DATE table columns instead of TIMESTAMP table columns, the values provided by the connector are in a format that does not require conversion prior to storing them in the target table. Therefore, the use of the date cache feature will not provide much benefit in this case. In Information Server releases prior to Version 9.1, the conversion will be needed in this case as well, so the use of the date cache will provide benefits similar to when loading values to TIMESTAMP target table columns.
Disabling redo logging
Database redo log tracks changes made to the database and is utilized in the database recovery process. When the connector is loading data to the target table, the redo log is updated to reflect the rows loaded to the table. In case of the database storage media failure, after the database is restored from the most recent backup, the rows loaded by the connector since the backup was taken are restored from the redo log. The ability to restore loaded rows from the redo log leads to potential decrease in performance because the load operation takes longer to complete when the redo log needs to be maintained and capture the data loaded to the table.
To speed up the load operation performed by the connector you can disable the redo logging by setting the NOLOGGING attribute on the target table. You can set this flag when you create the table, or if you wish to set it only during the load you can do that by using the Before SQL statement and After SQL statement connector properties. In the Before SQL statement property specify the statement ALTER TABLE table_name NOLOGGING and in the After SQL statement property specify the statement ALTER TABLE table_name LOGGING, where table_name is the name of the table to which the stage is loading data. Another option is to use the Disable logging connector property: when this property is set to No then the logging flag (LOGGING or NOLOGGING) is used, and when it is set to Yes, the redo logging is disabled.
Disabling redo logging for the bulk load can significantly improve the load performance. In case of the media failure the data that was loaded while redo logging was disabled can be restored from the database backup if the backup was taken after that load but it cannot be restored from the redo log. If restoring loaded data from the redo log in case of a media failure is not required, or if the data can be easily re-loaded in that case, then disabling redo logging should be considered for speeding up the load operation. Refer to the Resources section for the link to Oracle Database online documentation where you can find more information about redo log and considerations related to disabling the redo logging.
If you wish to use the connector in bulk load mode to quickly transfer the data to the target database but at the same time you need the target table triggers, indexes and constraints to be maintained as if a conventional insert statement was used, one option for you to consider is to initially load the records to an intermediate staging table that has the same definition as the target table but does not contain any triggers, indexes and constraints. After the load to the staging table completes, issue a follow-up MERGE or INSERT statement that reads rows from the staging table and writes them to the target table. This statement can be specified in the After SQL statement connector property so that it is automatically invoked at the end of the job. The use of the staging table will cause the data to be written twice (once when loaded by the connector to the staging table and once when copied to the target table by the follow-up statement) but will, at the same time, have the following beneficial characteristics (a sketch of such a follow-up statement appears after this list):
- The records will be processed only once by the job. This is important when the processing of records in the job is resource intensive.
- The records will be moved from the DataStage engine hosts to the database host only once.
- The INSERT statement will be executing on the database side, so during that time the DataStage engine hosts will be available to process other workloads.
- You can use database SQL functions in the INSERT statement to perform additional transformation on the values that were loaded to the staging table prior to storing them to the target table.
- If the INSERT statement that copies the values fails for some reason, the loaded records will remain available in the staging table and you can issue another INSERT statement after handling the rows in the staging table that have caused the error. You can utilize the LOG ERRORS INTO clause in the INSERT statement, where the rows that cause the INSERT statement to fail are directed to the error logging table along with the details indicating the reason for their error.
- The indexes, constraints and triggers on the target table will remain enabled, valid and enforced at all times.
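A minimal sketch of this approach, assuming hypothetical staging and target tables SALES_STG and SALES_FACT (the table names and the tag string are illustrative):

-- One-time setup: creates the error logging table ERR$_SALES_FACT
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('SALES_FACT');

-- Follow-up statement, for example in the After SQL statement property:
INSERT INTO SALES_FACT
SELECT * FROM SALES_STG
LOG ERRORS INTO ERR$_SALES_FACT ('nightly load') REJECT LIMIT UNLIMITED;

The DBMS_ERRLOG package creates the error logging table with the default ERR$_ name prefix; rows that violate constraints land there instead of failing the whole statement.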
Manual load
You can configure Oracle Connector stage to run in manual load mode in which case the stage will write records to a file on the engine tier host that can then be passed to the Oracle SQL*Loader utility to load the records to the target table. To configure the stage in this mode set the Manual load property to Yes. Loading records in manual load mode is generally slower than loading them directly to the target table but in some cases can be very useful, as shown next.
One example would be if you want to run the job to extract and process the data at a time when the target database is not available for load. You can configure the stage to store the data to a file and then later, when the database is available, you can load the data to it using the SQL*Loader utility. The data will already be processed at that time and ready to be stored to the target table. In cases when the processing phase of the job takes a long time to complete, such as when pulling data from a slow external source or when complex and CPU-intensive transformations are involved in the job, the manual load approach can result in significant time savings.
Note that the connector actually generates two files in this mode: the data file with the records to be loaded and the control file used to pass options to SQL*Loader and inform it about the location and format of the data file. You can also transfer the generated files to a host that doesn't have DataStage installed and load the records from there, provided that the SQL*Loader tool is available on that host. If that host is the Oracle server host, the load operation will be local to the database.
Restarting the load and backing up the data that needs to be loaded is also convenient with this approach because the generated data file is already in the format supported by the SQL*Loader tool.
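For example, a SQL*Loader invocation using the generated files might look like the following (the connection string and file names are illustrative):

sqlldr userid=dsuser@ORCLDB control=ora_load.ctl log=ora_load.log

SQL*Loader prompts for the password if it is not supplied in the userid argument.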
Note that when the stage is configured to run in manual load mode it will try to connect to the database even if it will not be loading records to the database on that connection. If the target database is not available to accept connections at the time when you run the job, you can work around the problem by pointing the stage to another database that is available at that time, so that the stage can establish a connection and complete the manual load operation.
If it is critical to ensure the stage does not connect to any database in manual load mode, you can accomplish that by ensuring that the following conditions are met:
- The connector is not collecting operational metadata. To ensure this, define the DS_NO_PROCESS_METADATA environment variable for the job and set it to the value FALSE.
- The connector does not issue statements on the database that would be interpreted as events by the IBM InfoSphere Guardium Activity Monitor. To ensure this, make sure that you don't have the CC_GUARDIUM_EVENTS environment variable set for the job.
- The Run before and after SQL statements property must be set to No. This ensures that the connector is not trying to perform any SQL statements on the database prior to and after loading the records to a file.
- The Table action property must be set to Append. This ensures the connector is not trying to create, drop or truncate a database table prior to performing the load operation.
- The Perform operations before bulk load and Perform operations after bulk load properties must not specify any index and constraint maintenance operations.
When all of these conditions are met, the connector will load records to the data file without connecting to the database. You will still need to provide Username and Password property values in order to be able to compile the job. You can set them to dummy values, or you can set the Use external authentication property to the value Yes, which disables the Username and Password properties so that you do not need to provide values for them.
Note that even if you configure the stage not to connect to the database, you will still need to have the Oracle Client product installed on the DataStage engine host on which the connector runs; otherwise, the connector will fail initialization when the job starts and the job will fail.
Array size and Buffer size
When the Oracle Connector stage is configured to run in bulk load mode, it reads records from the input link and stores them in the Oracle-specific direct path column array. The connector does not create this array in memory like the array that it uses for fetch and DML operations. Instead, the Oracle client creates this array on behalf of the connector. The size of this array in number of records is specified through the Array size property. The default value is 2000.
When the column array becomes full the connector requests from Oracle client to convert the array to the internal stream buffer format and to send this buffer to the Oracle server. The size of this stream buffer (in kilobytes) is specified through the Buffer size connector property. The default value for this property is 1024 (1 megabyte).
Depending on the length of the records, it is possible that the specified Buffer size value will not be sufficient to store the Array size number of records in stream format. In that case, the Array size will be automatically reduced to a smaller value and Oracle Connector will log a message to indicate the newly enforced number of records that it will load at a time. Note that regardless of the specified Buffer size value the maximum Array size value will be 4096.
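As a rough illustration of the relationship between the two properties (the average record size is an assumption for the example):

required Buffer size (KB) ~ Array size x average record size in stream format (bytes) / 1024
example: 4096 records x 500 bytes = 2,048,000 bytes, i.e. about 2000 KB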
When configuring the Array size and Buffer size values in order to achieve optimal performance, you can apply the following approach:
- Set the Array size to 4096 (maximum supported) and Buffer size to 1024.
- Run the job and check if the connector reported a message in the job log that it will load less than 4096 records at a time. If it did then go to step 3. If it did not then go to step 4.
- Continue trying progressively larger Buffer size values until the performance worsens or the connector stops reporting the message, whichever takes place first.
- Continue trying progressively smaller Buffer size values until the performance worsens.
The connector can be configured to run in parallel execution mode and load data from multiple player processes with each player process loading a subset (partition) of input link records to the target table. In this case, parallel load must be enabled for the target table segment. To do this set the Allow concurrent load sessions connector property to value Yes. If this property is set to No and you attempt to load records from multiple player processes, you will receive Oracle error "ORA-00604: error occurred at recursive SQL level 1" possibly accompanied by the Oracle error "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired".
The exception to the above rule is when the stage is loading rows to a Range, List or Interval partitioned table (which can optionally be subpartitioned) and the Partition type setting for the link is set to Oracle Connector. This exception scenario is covered in the next section which discusses loading records to a partitioned table.
Setting Allow concurrent load sessions property to Yes has implications on how indexes are handled during load. For more information refer to the earlier section which discussed the handling of indexes.
When you wish to load records to a single partition or subpartition of the target table, set the Table scope connector property to the Single partition or Single subpartition value, and specify the name of the partition or subpartition in the Partition name or Subpartition name property. Doing this will result in locking only the segment of the specified partition or subpartition and will also return an error if any input record needs to be stored to another partition, thereby providing a verification mechanism that the records belong to the partition or subpartition specified in the stage.
When you wish to load records to multiple partitions of a table, set the Table scope connector property to the value Entire table. With respect to triggers, indexes and constraints defined on the table, the concepts that apply to non-partitioned tables apply in this case as well. When rebuilding indexes on partitioned tables, the connector will automatically rebuild only those index partitions or subpartitions that have been marked unusable during the load process. This applies to both locally and globally partitioned indexes.
When you wish to load records in parallel and make sure that each player process is loading records to a dedicated table partition, you can do that for a table partitioned based on Range, List or Interval partitioning strategy (and is optionally subpartitioned) by specifying the Oracle connector value for the Partition type setting for the input link. In this case, each player process of the stage will be associated with a single dedicated table partition and will load data to that partition's segment (or its set of subpartition segments if the table is composite-partitioned) and because no two player processes will be accessing the same segment, the Allow concurrent load sessions connector property can be set to No. The exception is when the table has a globally partitioned index. In that case, the job will fail with the Oracle error "ORA-26017: global indexes not allowed for direct path load of table partition string".
When considering using Oracle connector value for Partition type when loading records to a partitioned table, consult the list provided in Part 1 article in the section that covered DML operations on a partitioned table.
Reject link considerations
Reject links are a convenient mechanism for handling input records for which the database reported an error when it tried to perform the operation specified in the stage. You can define conditions on the reject link under which the records in error should be rejected, you can choose to include error code and error text information with the rejected records, and you can point the reject link to another stage to store the rejected records in a location of your choice, such as a file or another database table.
However, reject links can have negative effect on the performance. When the stage has a reject link defined, it needs to inspect the error code for each record reported by the database to be in error, compare it to the conditions defined on the reject link and submit the record to the reject link, potentially accompanied by the error code and error text information. These operations consume system resources and affect the job performance as a whole.
You saw earlier that in order to use reject links to handle records with constraint
violations in bulk load write mode, you must run the stage in sequential execution
mode. Doing this will result in rejecting all the rows from the table that violated
the constraints, including those rows that were present in the table before the
load. Also, when you run
INSERT statements with the
APPEND_VALUES optimization hint you cannot rely on the reject link
functionality.
To avoid these limitations associated with the use of reject links, consider not using reject links at all and instead eliminating the records in error BEFORE they reach the Oracle Connector stage or AFTER the stage has completed the operation for which it was configured.
As an example of handling records in error BEFORE they reach the Oracle Connector
stage, let us consider the
NOT NULL and
CHECK constraints
in a table created using the statement shown in Listing 7.
Listing 7. Statement to create test table
CREATE TABLE TABLE_TEST(
    C1 NUMBER(10) CONSTRAINT CONSTR_NOT_NULL_C1 NOT NULL,
    C2 VARCHAR2(20),
    C3 DATE,
    CONSTRAINT CONSTR_CHK_C1 CHECK (C1 > 20000 AND C1 < 70000)
);
The constraints on this table require that the values for field
C1 are
not
NULL and that they are between 20000 and 70000 (exclusive). Instead
of configuring the Oracle Connector to use a reject link for handling these
constraints, the constraint checking can be done in the stages upstream of the
Oracle Connector stage. For example, you can use the Filter stage with one output
link leading to the Oracle Connector stage and another reject link leading to a
Sequential File stage (or any other stage to which you wish to direct the bad
records). In the Where Clause property of the Filter stage you would
specify a condition as the one shown in Listing 8.
Listing 8. Filter stage condition
C1 IS NOT NULL AND C1 > 20000 AND C1 < 70000
You would also set the Output Rejects property of the Filter stage to value True. The Filter stage would pass to its output link the records that meet this constraint, and would send the remaining records to the reject link.
This configuration is shown in Figure 2.
Figure 2. Handling constraint violations with a Filter stage
For more complex constraint expressions or to annotate rejected records with the
information describing the reason why they were rejected, you could use the
Transformer stage. For the previous example, the Transformer stage would have one
input link (for example, input_records) with columns
C1,
C2 and
C3 and would have three output links. All three
output links would copy the columns from the input link. The first output link would
have the constraint
IsNull(input_records.C1) defined and an extra
column to contain the constant literal value
"C1 value is null.". The
second output link would have the constraint
input_records.C1 <= 20000 Or
input_records.C1 >= 70000 defined and an extra column to contain
literal value
"C1 value: " : input_records.C1 : " is not between 20000 and
70000.". The first and the second link would lead to the Funnel stage
that would combine the records on them and store them to a reject file using the
Sequential File stage. The third output link would lead to the Oracle Connector
stage that would write them to the table. This configuration is shown in Figure 3.
Figure 3. Handling constraint violations with a Transformer stage
The following are some examples of handling records with errors AFTER the stage has completed moving the records to the database:
- When bulk loading to a target table with constraints, load the records instead to a staging table that doesn't have the constraints defined on it and then issue a follow-up INSERT statement to move the records to the target table. Specify the LOG ERRORS INTO clause in the INSERT statement so that the records in error are directed to an alternate table. For more information, refer to the earlier section that covered loading of records to a staging table.
- When bulk loading to the target table with constraints, configure the stage to disable the constraints before the load and enable them after the load, specify the exceptions table but do not configure the stage to automatically process the exceptions. Instead, process them manually after the job completes. Alternatively, perform custom handling of constraints and indexes using a strategy such as the one suggested earlier in the article.
Data types
In this section, we will look at how selection of data types for the link columns can affect the connector performance. The default link column data types are provided by the connector when it imports the table definition. They are also provided by the connector at runtime when the stage has an output link without columns defined on it and the runtime column propagation feature is enabled for the link.
The default link column data types represent the closest match for the corresponding table column data types in terms of the ranges of values that they support. In some cases, you may have an additional insight regarding the format and range of the actual data values processed by the stage which may in turn allow you to select data types for the link columns that result in better performance in the connector. In this section, a number of such scenarios are examined.
You should always start with the link columns of default data types. To determine
the default data types, use the connector to import the table that you will be
accessing from the stage and look at the column definitions in the imported table
definition. If you configured the stage to issue a complex
statement that references multiple tables then you will not be able to import a
single table definition for that statement. What you can do in that case is create a
temporary view based on that statement and then import the view instead. When your
stage is configured to execute a PL/SQL anonymous block, you will not be able to
import a table or a view to determine the default link column data types. In this
case, you can analyze the PL/SQL block and for each bind parameter in the PL/SQL
block (specified in
:param_name or
ORCHESTRATE.param_name
format) determine the object in the database to which it corresponds. In some cases,
this object will be a column in a table or a view, in some cases it will be a
parameter of a stored procedure, but in all cases that object will have an
associated Oracle data type. Create a temporary table with the columns of those same
data types and import that table.
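For example, suppose the stage runs the following hypothetical anonymous block (the table, column and parameter names are illustrative):

BEGIN
  INSERT INTO ORDERS(ORDER_ID, AMOUNT) VALUES(ORCHESTRATE.ORDER_ID, ORCHESTRATE.AMOUNT);
END;

-- ORDER_ID corresponds to a NUMBER(10) column and AMOUNT to a NUMBER(12,2) column,
-- so create and import a temporary table with those same data types:
CREATE TABLE TEMP_PARAM_TYPES(ORDER_ID NUMBER(10), AMOUNT NUMBER(12,2));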
Refer to the Resources section for the link to the Information Server Information Center where you can find the table that shows how the Oracle Connector maps Oracle data types to DataStage data types by default.
Once you have placed columns of default data types on the link, run the job and measure the performance. Then apply the information from the remainder of this section to further optimize the job by changing link column data types in such a way so that the performance improves while no loss of data occurs.
Representing CHAR, VARCHAR2, NCHAR and NVARCHAR2 table column data types
In this section, we will look at the
CHAR,
VARCHAR2,
NCHAR and
NVARCHAR2 Oracle data types.
When you use Oracle Connector to import tables with columns of these four data
types, the connector will model
CHAR and
VARCHAR2 table
columns as DataStage
Char and
VarChar link columns, and it
will model
NCHAR and
NVARCHAR2 table columns as DataStage
NChar and
NVarChar link columns. The exception to this
rule is the case when
CHAR and
VARCHAR2 table column
definitions are based on the character length semantics and the database
character set is a multi-byte character set such as
AL32UTF8. In that
case,
CHAR and
VARCHAR2 columns are modeled as
NChar and
NVarChar link columns.
If your database character set is a multi-byte character set such as
AL32UTF8 and your table columns use character length semantics,
such as a column defined to be of
VARCHAR2(10 CHAR) type, and all the
actual data values use only single-byte characters, then use the
Char
and
VarChar columns on the link (with Extended attribute left blank)
instead of the
NChar and
NVarChar columns.
Minimize the character set conversions in the stage by ensuring that the effective
NLS_LANG environment variable value for the job is compatible with
the NLS map name specified for the job. This is important not just for performance reasons, but for the correct functioning of the stage with regard to the interpretation of the character set encoding of the values. To determine the effective
NLS_LANG value for the job, run the job with the
CC_MSG_LEVEL environment variable set to
2 and inspect
the job log. The connector will log the effective
NLS_LANG environment
variable value. If you wish to use a different value, you will need to define
NLS_LANG environment variable for the job and set it to the desired
value. Oracle Connector stage editor does not have the NLS tab so the NLS
map name selected for the job is used for the stage. You can specify this value for
the job by opening the job in the DataStage Designer, selecting the menu option
Edit and then clicking Job properties and the
NLS page in the dialog and providing the value in
Default map for stages setting. Table 1
shows examples of compatible NLS map name and
NLS_LANG values. Note
that you should set the language and category portions of the
NLS_LANG
value to match the locale of your DataStage engine host.
Table 1. Compatible NLS map name and NLS_LANG values
If the values processed by the stage are based on only ASCII characters, set the NLS
map name to
ASCL_ASCII and set
NLS_LANG to
AMERICAN_AMERICA.US7ASCII. Otherwise, if possible, set the
NLS_LANG value to match the database character set. For example, if
the stage is writing records to a database with the
AL32UTF8 database
character set, use the
NLS_LANG value of
AMERICAN_AMERICA.AL32UTF8 and make sure the NLS map name for the
job is set to
UTF-8. To determine the database character set you can
connect to the database from SQL*Plus and issue the statement shown in Listing 9.
Listing 9. Determining the database character set
SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_CHARACTERSET';
Always make sure that the values in the job represented by the
Char and
VarChar link columns are actually encoded as indicated in the NLS
map name and
NLS_LANG settings and that the values for
NChar and
NVarChar link columns are UTF-16 encoded.
Note that the connector handles
Char and
VarChar link
columns which have the Extended attribute set to Unicode the same way it handles
NChar and
NVarChar link columns.
Make sure that all Char, VarChar, NChar and NVarChar link columns have the Length attribute set to a concrete value. Leaving the Length attribute empty can have a severe negative impact on the performance of the stage. When the Length is not specified, the connector assumes that the length is the maximum allowed for the respective bind parameters, which is 4000 bytes. The connector will therefore allocate memory based on the assumed column lengths, which may result in large memory consumption.
Suppose that the connector is configured to insert rows to a database table with the
default Array size of 5000, and the link contains 50
VarChar
columns that do not have the Length attribute set. The connector will allocate (in
each player process) approximately the total of 50 (columns) x 5000 (array size) x
4000 (maximum value length) bytes which is close to 1 gigabyte. If the actual table
columns in the database are defined as
VARCHAR2(50 BYTE) then by
setting the Length attribute of the link columns to 50, the Array size
could be left set to 5000 and the size of the allocated memory per player process
would drop to 50 (columns) x 5000 (array size) x 50 (maximum value length) bytes
which is about 12 megabytes.
Do not use
LongVarBinary,
LongVarChar and
LongNVarChar link columns to represent
CHAR,
VARCHAR2,
NCHAR and
NVARCHAR2 table
columns. These link columns are appropriate for handling table columns of the LONG, LONG RAW, BLOB, CLOB, NCLOB and XMLType data types, as discussed later in the text. Even in those cases, if the actual
values for those columns are less than 4000 bytes in size, the
Binary,
VarBinary,
Char,
VarChar,
NChar and
NVarChar link columns should be used instead
of
LongVarBinary,
LongVarChar and
LongNVarChar link columns. When
LongVarBinary,
LongVarChar and
LongNVarChar link columns are used,
the connector disables array processing and enforces the Array size value
of 1, even if you specify small Length attribute values for those link columns.
Enforcing the Array size of 1 can have a major negative impact on the
performance.
Ensure that the connector is using optimal character set form for the
Char,
VarChar,
NChar and
NVarChar link columns. Character set form is an Oracle database
concept and can be one of two values:
IMPLICIT and
NCHAR.
The optimal character set form for the connector to use for
CHAR and
VARCHAR2 table columns is
IMPLICIT and for
NCHAR and
NVARCHAR2 table columns it is
NCHAR.
The connector will automatically use optimal character set form if you use
Char and
VarChar link columns for
CHAR
and
VARCHAR2 table columns and
NChar and
NVarChar link columns for
NCHAR and
NVARCHAR2 table columns. As stated earlier, note that
Char and
VarChar link columns with Extended attribute
set to Unicode are treated as
NChar and
NVarChar link
columns.
In some cases, you may end up using link columns that result in sub-optimal
character set form selection in the connector. For example, when you use the
connector to import a table with
VARCHAR2 columns that are based on
character length semantics and the database is based on
AL32UTF8
(multi-byte) character set, the connector will import that column as
NVarChar link column. By importing it this way the connector is
able to set the Length attribute of the link column to match the length specified in
the table column definition while ensuring that data truncation does not happen at
runtime. At runtime the connector will choose the character set form
NCHAR for this link column which will not be the optimal choice for
the
VARCHAR2 target table column. If this link column happens to be
referenced in a
WHERE clause in the statement specified for the stage
then any index defined on this table column will not be utilized. This applies to
SELECT statements used by the stage configured for sparse lookup
mode, as well as
UPDATE and
DELETE statements when the
connector is configured to write records to the database in Insert,
Update, Delete, Insert then update, Update then
insert or Delete then insert write modes. The skipping of the
index in this case can have significant performance implications. To force the
connector to use the optimal character set form in this case, you could apply one of
the following strategies:
- Change the link column data type from NVarChar to VarChar. In this case, you will also need to update the Length attribute for this link column by multiplying it by 4 because UTF-8 characters can consume up to 4 bytes. As mentioned before, if you know that all the actual data values processed by the stage use single-byte characters, then you can leave the Length attribute unchanged.
- Change the data type of the table column to NVARCHAR2. In this case, the NCHAR character set form selected by the connector will be the optimal choice. In many practical cases though, making changes to the existing column definitions in the database tables is not an option.
- Utilize the CC_ORA_BIND_FOR_NCHARS environment variable (examples follow this list). Set the value of this environment variable for the job to a comma-separated list of link column names for which you wish the connector to use the NCHAR character set form. If you specify the special value (none) for this environment variable, then the connector will use the IMPLICIT character set form for all columns on the link. Another special value is (all) (the parentheses are part of the value) which will result in the connector using the NCHAR character set form for all the columns on the link. If none of the tables in your database have NCHAR, NVARCHAR2 and NCLOB (discussed later) columns, then you can define the CC_ORA_BIND_FOR_NCHARS environment variable at the DataStage project level and specify (none) as its default value (again, the parentheses are part of the value). That way the connector will always use the IMPLICIT form, which will always be the optimal character set form.
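Illustrative example values (COL1 and COL3 are hypothetical link column names):

CC_ORA_BIND_FOR_NCHARS=COL1,COL3    use NCHAR form for link columns COL1 and COL3 only
CC_ORA_BIND_FOR_NCHARS=(none)       use IMPLICIT form for all link columns
CC_ORA_BIND_FOR_NCHARS=(all)        use NCHAR form for all link columns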
Representing NUMBER, FLOAT, BINARY_DOUBLE and BINARY_FLOAT table column data types
In this section we will take a look at the NUMBER, FLOAT, BINARY_DOUBLE and BINARY_FLOAT Oracle data types.
For
NUMBER(p, s) table column where precision
p and scale
s are explicitly
specified, the default and at the same time the optimal link column data type is
Decimal with Length attribute set to
p and
Scale attribute set to
s. The exceptions to this rule are the scenarios where s < 0 and where s > p. In those cases, the values need to be provided to the stage and retrieved from the stage in text format, so the link column should be of VarChar type with a Length attribute value sufficiently large to accommodate all possible values for the respective column in the table.
For
NUMBER(p) table column where precision
p is explicitly specified but scale
s is omitted, the default link column data type is
Decimal with the Length attribute set to
p
and
Scale attribute left empty. If the remaining stages in the
job are operating on integer (
TinyInt,
SmallInt,
Integer or
BigInt) values for these columns then you
may be able to achieve better performance if you preserve those integer data types
for the columns on the link of the Oracle Connector stage than if you change them to
the
Decimal data type. This is especially the case when the connector
stage is writing to a partitioned table and the corresponding table column is used
as part of the partition key of the table and you specified Oracle
connector value for the Partition type setting of the link. If you
decide to replace
Decimal(p) link columns with integer link
columns keep in mind the differences between these link data types. Although they
all represent integer values, the ranges of values that they cover differ.
For example, if a table column is defined as
NUMBER(5) and you choose
to replace the default
Decimal(5) link column with a
SmallInt link column, the values supported by the
NUMBER(5) table column will be in the range [-99999, 99999] and the
values supported by the
SmallInt link column will be in the range
[-32768, 32767]. If you set the Extended attribute of the link column to Unsigned,
the values that are supported by the link column will be in the range [0, 65535]. In
both cases, the range of the link column will be a sub-range of the respective table
column. So you should only use this approach if you are sure that all the values
that will be processed by the stage belong to both ranges. In the previous example,
you may choose to use
SmallInt link column (signed or unsigned) when
writing values to the database, but to use
Integer column (signed) when
reading values from the database because the range of the supported values for the
Integer (signed) link column is [-2147483648, 2147483647] and is
therefore sufficient to support any value supported by the
NUMBER(5)
table column. Note that if you choose to use
BigInt column on the link,
the range of supported values for the link column will be [-9223372036854775808,
9223372036854775807]. And if the Signed column attribute is set to value Unsigned
the range will be [0, 18446744073709551615]. However, the choice of
BigInt link column will likely result in smaller performance gains
than if
TinyInt,
SmallInt or
Integer link
column is used, and may even produce worse performance than the
Decimal
link column. This is because the connector will exchange
BigInt link
columns values with the database as text values which will involve additional data
type conversions.
For the
NUMBER table column where both precision
p and scale
s are omitted, the
default link column data type is
Double. But if you are certain that
the actual values in the table are all integer values, you may choose to use link
column of one of the integer data types (
TinyInt,
SmallInt,
Integer,
BigInt) or you may choose
to represent the values as decimal values with a specific precision and scale by
using a link column of
Decimal data type. Keep in mind that the
supported ranges of these link column data types are not fully compatible so only
use these link column data types in cases when they will not result in the loss of
data precision due to the rounding or in cases when such a rounding is acceptable.
For
FLOAT,
BINARY_DOUBLE and
BINARY_FLOAT
table columns, the default and optimal column types to use for the corresponding
link columns are
Float,
Double and
Float,
respectively.
Representing DATE, TIMESTAMP and INTERVAL table column data types
In this section we will take a look at the DATE, TIMESTAMP and INTERVAL Oracle data types.
For
DATE,
TIMESTAMP(0),
TIMESTAMP(0) WITH TIME
ZONE and
TIMESTAMP(0) WITH LOCAL TIME ZONE table columns the
default link column data type is
Timestamp with Extended attribute left
empty. This is the optimal link column type to use in case when hour, minute and
second portions of the values need to be preserved. Otherwise, the
Date
link column will likely result in better performance because the stage does not need
to handle hours, minutes and seconds in the values. Note that if you need to
preserve time zone information in the values you need to use character based link
columns such as
VarChar. This is covered in more detail later in this
section.
For
TIMESTAMP(p),
TIMESTAMP(p) WITH TIME
ZONE and
TIMESTAMP(p) WITH LOCAL TIME ZONE table
columns where precision
p > 0, the default link column data
type is
Timestamp with Extended attribute set to Microseconds. The
processing of
Timestamp link columns with Extended attribute set to
Microseconds takes considerably more time in the Oracle Connector than the
processing of
Timestamp link columns which have the Extended attribute
left empty. This is especially the case for
SELECT and DML
(
INSERT,
UPDATE,
DELETE) statements and
to a lesser extent bulk load operation. Avoid the use of
Timestamp link
columns with Extended attribute set to Microseconds unless you absolutely need to
support fractional seconds in the timestamp values.
The connector models
TIMESTAMP(p),
TIMESTAMP(p)
WITH TIME ZONE and
TIMESTAMP(p) WITH LOCAL TIME
ZONE table columns with
p > 6 as
Timestamp link columns with Extended attribute set to Microseconds.
The connector will in this case perform truncation of fractional seconds to the
microsecond precision. To handle timestamp values with fractional second precision
greater than 6 up to and including 9, use the
VarChar link column. In
that case, the connector will exchange timestamp values with Oracle client as text
values. Ensure in this case that the
NLS_TIMESTAMP_FORMAT session
parameter is set to match the format of the actual values provided to the stage on
its input link. The same format will be used by the stage for the values that it
provides on its output link. You can set this session parameter using the Before
SQL (node) statement property, by issuing an
ALTER SESSION
statement. For example, to handle timestamp values as text values with 9 fractional
digits, a statement like the one shown in Listing 10 could be
specified in the Before SQL (node) statement property.
Listing 10. Setting the timestamp format with 9 fractional seconds for the session
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='YYYY-MM-DD HH24:MI:SS.FF9'
You can apply this same approach for
TIMESTAMP(p) WITH TIME
ZONE,
INTERVAL YEAR (yp) TO MONTH and
INTERVAL DAY(dp) TO SECOND(sp) table columns. For
TIMESTAMP(p) WITH TIME ZONE table columns set the
NLS_TIMESTAMP_TZ_FORMAT session parameter to the appropriate
format, which includes the time zone format if the time zone needs to be preserved
in the values. The INTERVAL YEAR(yp) TO MONTH and INTERVAL DAY(dp) TO SECOND(sp) values should be specified as
interval literals in the format defined in the Oracle database. Refer to the Resources section for the link to Oracle Database online
documentation where you can find more details and examples for
INTERVAL
Oracle data types and interval literals.
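For illustration, Oracle interval literals look like the following (the values are arbitrary examples):

INTERVAL '2-6' YEAR TO MONTH          -- 2 years, 6 months
INTERVAL '3 12:30:06.5' DAY TO SECOND -- 3 days, 12 hours, 30 minutes, 6.5 seconds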
If you are using
Timestamp (with Microseconds) link columns because you
need to preserve microseconds in timestamp values, you can again apply this same
approach as it can potentially lead to improved performance. For example if you are
fetching values from a
TIMESTAMP table column and the downstream stage
is capable of handling timestamp values in text format, then replace the
Timestamp (with Microseconds) column on the output link of Oracle
Connector stage with a
VarChar column so that the connector provides
timestamp values in text format. Likewise, if you are inserting values to a
TIMESTAMP table column and the upstream stage can provide the
timestamp values in text format, replace the
Timestamp (with
Microseconds) column on the input link of Oracle Connector stage
with a
VarChar column so that the connector accepts timestamp values in
text format. The Length attribute of the
VarChar column and the
NLS_TIMESTAMP_FORMAT session variable must be appropriate for the
format of the timestamp text values. For example, to use the default
YYYY-MM-DD hh:mm:ss.ffffff format for the timestamp values, set the
Length attribute of the
VarChar column to 26 and specify the statement
shown in Listing 11 in the Before SQL (node)
statement connector property.
Listing 11. Setting the timestamp format with 6 fractional seconds for the session
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='YYYY-MM-DD HH24:MI:SS.FF6'
Even in the cases when the remaining stages of the job are not capable of handling
timestamp values with microseconds in text format and require that the
Timestamp (with
Microseconds) link column is used, you
can choose to map this column to a
VarChar column explicitly for the
Oracle Connector stage by using a Transformer stage as shown below:
- If the link column is located on the output link of the Oracle Connector stage and the link name is out_link, insert a Transformer stage between the Oracle Connector stage and the next stage in the job flow and map the VarChar(26) column on the input link of the Transformer to the Timestamp (with Microseconds) column on the output link of the Transformer stage using the derivation shown in Listing 12.
Listing 12. Converting string to timestamp
StringToTimestamp(out_link.C1,"%yyyy-%mm-%dd %hh:%nn:%ss.6")
- If the column is located on the input link of the Oracle Connector stage and the link name is in_link, insert a Transformer stage between the previous stage in the job flow and the Oracle Connector stage and map the Timestamp (with Microseconds) column on the input link of the Transformer stage to the VarChar(26) column on the output link of the Transformer stage using the derivation shown in Listing 13.
Listing 13. Converting timestamp to string
TimestampToString(in_link.C1,"%yyyy-%mm-%dd %hh:%nn:%ss.6")
Representing RAW table data type
In this section we will take a look at the
RAW Oracle data type.
Use
VarBinary link columns to read and write
RAW table
column values in their native raw binary form. For the Length attribute of the link
columns specify the sizes of the respective
RAW table columns.
Use the
VarChar link columns if you need to read or write binary values
as sequences of hexadecimal digit pairs. In this case, make sure to set the Length
attribute of the link columns to a value twice the size of the respective
RAW table columns because each byte in a
RAW table
column value will be represented by a pair of hexadecimal digit (0-9 and A-F)
characters. The use of VarChar link columns will allow you to read and write RAW table column values in hexadecimal digit-pair form without a need to handle the conversions in another stage in the job.
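A brief illustration, assuming a hypothetical table with a RAW(16) column:

-- A 16-byte RAW column:
CREATE TABLE T_RAW(ID NUMBER(10), PAYLOAD RAW(16));

-- Link column choices for PAYLOAD:
--   VarBinary with Length 16: values flow in their native binary form
--   VarChar with Length 32:   each byte becomes two hex digits, e.g. bytes DE AD BE EF are read as 'DEADBEEF'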
Representing LONG, LOB and XMLType table column data types
In this section we will take a look at the
LONG,
LONG RAW,
BLOB,
CLOB,
NCLOB and
XMLType Oracle data types.
The preferred link column data types for these table column data types are shown in Table 2.
Table 2. Preferred link column data types for LONG, LOB and XMLType table column data types
You can use other combinations of link and table column data types as well but doing that may negatively affect the performance and will likely require you to use additional data type conversion functions in your SQL statements in order to avoid data type mismatch errors.
Every time you have
LongVarBinary,
LongVarChar or
LongNVarChar columns defined on the link the connector enforces
the Array size value of 1 for that link, which has a negative effect on performance. This is the main reason why
LongVarBinary,
LongVarChar and
LongNVarChar link columns should only
be used when handling the table column data types covered in this section.
When the values represented by the
LongVarBinary,
LongVarChar and
LongNVarChar link columns are 4000
bytes long or less, you should use
VarBinary,
VarChar and
NVarChar link columns instead, respectively, even when the
corresponding table columns are of the data types discussed in this section. When
you do that the connector will not enforce the Array size value of 1 which
will typically result in significant performance improvements. Also the guidelines
presented earlier for character and binary table column data types will apply in that case instead of the
guidelines presented in the remainder of this section.
When you configure the connector stage to fetch values from a table column
represented by a
LongVarBinary,
LongVarChar or
LongNVarChar link column, you have two choices for how the
connector should pass the fetched values for that column to the output link. You can
configure the stage to pass the values inline which means that each value is passed
in its original form to the output link. Alternatively, you can configure the stage
to pass the values by reference which means that a locator string representing the
location of each value is passed instead of the actual value. The actual value
represented by the locator string is then fetched by the last downstream stage in
the job that stores this value to its intended destination. The locator string
contains information that specifies how to retrieve the actual value from its
source.
By default, the Oracle Connector stage passes values inline. To pass the values by
reference you need to set the Enable LOB references connector property to
Yes, and then in the Columns for LOB references
connector property specify the names of the input link columns for which you wish
the connector to pass the values by reference. You can use the following guidelines
when deciding to pass values by reference or inline for the
LongVarBinary,
LongVarChar and
LongNVarChar link columns:
- Do the values only need to be passed through the job or do they also need to be interpreted (parsed or modified) by certain stages in the job? In the latter case, do not pass the values by reference because it is not the values that are passed in that case but the locator strings that represent them.
- Which stage in the job is the consumer of the values? Only stages of connector-based stage types such as WebSphere MQ Connector, DB2 Connector, ODBC Connector, Teradata Connector and Oracle Connector are capable of recognizing locator strings and resolving them to actual values. Stages of other stage types will treat locator strings as the actual field values. For example, you can use the Oracle Connector stage to pass a CLOB value by reference to a DB2 Connector stage to store the value to a CLOB column in the DB2 database. But if you pass that value by reference to a Sequential File stage, the locator string will be stored in the target file and not the actual CLOB value. You can use the latter scenario to check the size of the locator strings so that you can compare them to the average size of the actual values and estimate the savings in terms of the amount of data transferred through the job when the locator strings are passed instead of the actual values. Typically the locator strings will be less than 2000 characters in length.
- How large are the actual values? The larger the values are, the more likely that passing them by reference will result in performance gains. As a rule of thumb, values up to about 100 kilobytes should be passed inline; larger values should be passed by reference. Understanding the specifics of your environment in combination with the actual testing that you perform should help you with this decision. For example, if the Oracle Connector stage that reads the value from the database runs on a DataStage engine host which is different from the host on which the downstream stage consumes those values, then passing values by reference may provide performance benefits even for smaller actual values (as long as they are still larger than the locator strings) because they will need to be transferred over the network as they are exchanged between the player processes of the respective stages.
Note that DataStage imposes a limit on the size of records transferred between the
player processes in the job. The
APT_DEFAULT_TRANSPORT_BLOCK_SIZE
built-in DataStage environment variable can be used to control this limit and by
default it is set to
131072 bytes (128 kilobytes). For example, if you
pass link column values inline and the values consume hundreds of kilobytes then you
will need to override this environment variable and set it to a sufficiently large
value, otherwise you will receive an error message at runtime indicating that the
record is too big to fit in a block. Set the value for this environment variable to
accommodate the largest record that will be passed through the job. Do not set it to
a value larger than necessary because the value will apply to all the player
connections on all the links in the job so the increase of this value may lead to a
significant increase of the memory usage in the job.
Oracle Connector stage handles values represented by
LongVarBinary,
LongVarChar and
LongNVarChar link columns as follows:
- For the read mode (fetching) it utilizes one of the following two Oracle mechanisms - piecewise fetch or LOB locators. The connector inspects the source table column data type and automatically selects the mechanism that is more suitable for that data type.
- For the bulk load write mode it utilizes the Oracle direct path interface when loading large values. Again the connector does this automatically and there are no additional options to consider.
- For the write modes other than bulk load it utilizes one of the following two Oracle mechanisms - piecewise inserts/updates or OCI LOB locators. By default it uses piecewise inserts/updates. To force the connector to use OCI LOB locators for some of the link columns, specify those link column names in a comma-separated list and set that list as the value of the CC_ORA_LOB_LOCATOR_COLUMNS environment variable for the job. To use the OCI LOB locator mechanism for all LongVarBinary, LongVarChar and LongNVarChar link columns, specify the special value (all) for this environment variable (the parentheses are part of the value); example values are shown after this list. Note that for the link columns that represent LONG and LONG RAW target table columns the connector must be configured to use the piecewise insert/update mechanism. Also note that for the link columns that represent XMLType target table columns the connector must be configured to use the OCI LOB locator mechanism if the values are larger than 4000 bytes. Do not confuse LOB locators with the locator strings used when passing values by reference. The former is an Oracle mechanism for accessing large values in the database, the latter is a DataStage mechanism for propagating large values through the job.
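Illustrative example values (DOC_BODY and NOTES are hypothetical link column names):

CC_ORA_LOB_LOCATOR_COLUMNS=DOC_BODY,NOTES    use OCI LOB locators for these two link columns
CC_ORA_LOB_LOCATOR_COLUMNS=(all)             use OCI LOB locators for all LongVarBinary, LongVarChar and LongNVarChar link columns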
The character set form and character set conversion considerations that affect the
performance and that were presented earlier in the
section which covered
CHAR,
VARCHAR2,
NCHAR
and
NVARCHAR2 table column data types apply here as well. Simply assume
the use of
LongVarChar and
LongNVarChar link columns
instead of the
VarChar and
NVarChar link columns as you
take those considerations into account.
As we saw in this section, to populate a target table with records that contain LongVarBinary, LongVarChar and LongNVarChar link columns you have three different options: bulk load mode, insert mode with the piecewise insert mechanism and insert mode with the OCI LOB locator mechanism. The three modes are available when the values arrive at the stage as inline values as well as when they arrive as locator strings (values passed by reference). Typically, the bulk load mode will have the best performance, followed
by the insert mode with piecewise insert mechanism and then the insert mode with OCI
LOB locator mechanism. Depending on the specifics of your environment this may not
always be the case and one way to determine which option provides the best
performance is to run a series of tests and compare the results. The exception is
the case when the target table has columns of
XMLType data type in
which case bulk load mode should not be considered if
BINARY XML or
OBJECT RELATIONAL storage type (as opposed to
CLOB
storage type) is used for those
XMLType columns or if the table is
defined as an XMLType object table.
When fetching, inserting or updating
XMLType values, utilize the Oracle
XMLType package functions
GETCLOBVAL,
GETBLOBVAL and
CREATEXML. Refer to the Resources section for the link to the Information Server
Information Center where you can find example statements for accessing XMLType
columns with the connector.
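As an illustration, statements for a hypothetical table XML_TAB with an XMLType column DOC might look like the following (these are sketches only; see the Information Center for the exact statements to use with the connector):

-- Reading: extract the XML as a CLOB
SELECT T.DOC.GETCLOBVAL() FROM XML_TAB T;

-- Writing: construct XMLType from a character value bound by the connector
INSERT INTO XML_TAB(DOC) VALUES(XMLTYPE.CREATEXML(:DOC));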
Conclusion
In this article we covered various aspects of performance tuning of the Oracle Connector stage in DataStage jobs. We discussed running Oracle Connector in bulk load mode taking into account data volume considerations, handling table triggers, indexes, and constraints. We also looked at reject link considerations for handling input records that have errors. Finally, we went over how the data types for link columns can affect the connector performance.
In combination with the information listed in the Resources section and the information you collected through your own research, this article should help you achieve optimal performance of Oracle Connector stages in your DataStage jobs.
Acknowledgments
Many thanks to the following contributors for their valuable input and review of this article:
- Paul Stanley, InfoSphere Senior Architect
- Tony Curcio, InfoSphere Product Manager
- Fayaz Adam, InfoSphere Staff Software Engineer
Resources
Learn
- Use an RSS feed to request notification for the upcoming articles in this series. (Find out more about RSS feeds of developerWorks content.)
- Explore the IBM InfoSphere Information Server Version 9.1 Information Center where you will find technical documentation for Information Server tools and components such as DataStage and Oracle Connector.
- Visit IBM Information Management Redbooks page and download redbook titles such as InfoSphere DataStage Parallel Framework Standard Practices and IBM InfoSphere DataStage Data Flow and Job Design.
- Find details about Oracle Database concepts mentioned in this article in the Oracle Database Documentation Library 11g Release 2 (11.2).
- Access a variety of technical resources for Information Management products in developerWorks Information Management zone.
- Stay up to date with IBM products and IT industry by attending developerWorks Events.
- Follow developerWorks on Twitter.
Get products and technologies
- Evaluate IBM software products in a way most convenient for you: by downloading trial product versions, by trying products online for a few hours in a SOA sandbox environment or by accessing products in a cloud computing environment.
Discuss
- Connect with other developerWorks users in developerWorks Community as you explore developer oriented blogs, forums, groups, wikis and more.. | http://www.ibm.com/developerworks/data/library/techarticle/dm-1304datastageoracleconnector2/index.html | CC-MAIN-2014-52 | refinedweb | 14,444 | 52.33 |
Getting .RUN error "Activity RUN is invalid--activity STRT needs to be executed (003381)" after succesfully running .STRT
Hello All,
I'm having problems getting a short Python dynamics script to run. Please see the code below:
import os,sys
import psse34
import psspy
import redirect

# Redirect output from PSSE to Python:
redirect.psse2py()

# Last case:
psspy.psseinit()
CASE = r"Planning Coordinator provided dynamic .SAV case"
ierr = psspy.case(CASE)
print("case error: {}".format(ierr))

# Convert loads (3 step process):
psspy.conl(-1,1,1)
psspy.conl(-1,1,2,[0,0],[100,0,0,100])
psspy.conl(-1,1,3)

# Convert generators:
psspy.cong()

# Solve for dynamics
psspy.ordr()
psspy.fact()
psspy.tysl()

# Save converted case
case_root = os.path.splitext(CASE)[0]
psspy.save(case_root + "_C.sav")

ierr = psspy.rstr(r"Planning Coordinator provided snapshot with channels for this case ")
print("rstr error: {}".format(ierr))

# Initialize
ierr = psspy.strt(outfile=r"C:\Users\u45738\OneDrive\Projects\Dynamics\test_out_PC_2024.out")
print("strt error: {}".format(ierr))

ierr = psspy.run(tpause=0)
print ierr

# 3-phase fault on 101 to 102
# 3 line
psspy.dist_branch_fault(ibus= 101,jbus= 102,id='3')

# Run to 24 cycles
time = 24.0/60.0
psspy.run(tpause=time)

# Clear fault
psspy.dist_clear_fault()
psspy.dist_branch_trip(ibus=101,jbus=102,id='3')
time = 20
psspy.run(tpause=time)
This error occurs even when starting from the planning coordinator provided .cnv file. I have error outputs at various stages and am getting "0" for the .STRT command. This is a large case (115000 buses) but I have seen cases this size ran before in my previous position.
Thanks for your help!
EDIT I was able to get these models to run by changing to the updated .STRT_2 API and using "net machine power" for OPTION(2). Thanks again for all of your answers, they were very helpful in solving this issue.
Show also the output in the progress window! | https://psspy.org/psse-help-forum/question/6304/getting-run-error-activity-run-is-invalid-activity-strt-needs-to-be-executed-003381-after-succesfully-running-strt/?answer=6312 | CC-MAIN-2022-40 | refinedweb | 320 | 54.49 |
Dave Yeo <daveryeo at telus.net> writes: > libavformat now has a dependency on libavutil causing this error on a libavformat has depended on libavutil for as long as libavutil has existed. Nothing has changed there. > static build (similar error building a shared libavformat) > [...] > R:\tmp\ldconv_libavformat_s_a_74454c088fcb1ebd70.lib(utils.obj) : > error LNK2029: "_ff_toupper4" : unresolved external > R:\tmp\ldconv_libavformat_s_a_74454c088fcb1ebd70.lib(utils.obj) : > error LNK2029: "_av_get_codec_tag_string" : unresolved external > > There were 2 errors detected > make: *** [ffmpeg_g.exe] Error 1 What is the exact command that fails? Run "make V=1" and paste the last command along with the full output from it. > Fix is to rearrange the build order and linking as in the attached > patch or any other order where libavutil comes before libavformat. For > shared builds we'd still need to pass something like LDFLAGS=-Lavutil > -lavutil. > Dave > > Index: common.mak > =================================================================== > --- common.mak (revision 23463) > +++ common.mak (working copy) > @@ -31,7 +31,7 @@ > $(eval INSTALL = @$(call ECHO,INSTALL,$$(^:$(SRC_DIR)/%=%)); $(INSTALL)) > endif > > -ALLFFLIBS = avcodec avdevice avfilter avformat avutil postproc swscale > +ALLFFLIBS = avutil avcodec avdevice avfilter avformat postproc swscale > > CPPFLAGS := -I$(BUILD_ROOT_REL) -I$(SRC_PATH) $(CPPFLAGS) > CFLAGS += $(ECFLAGS) That patch has no effect at all. You must be confused. -- M?ns Rullg?rd mans at mansr.com | http://ffmpeg.org/pipermail/ffmpeg-devel/2010-June/090962.html | CC-MAIN-2017-17 | refinedweb | 202 | 51.14 |
Learn to convert an InputStream to a String using the BufferedReader, Scanner or IOUtils classes. Reading a String from an InputStream is a very common requirement in several types of applications where we have to read data from a network stream or from the file system to do some operation on it.
Table of Contents
1. InputStream to String using Guava
2. BufferedReader
3. IOUtils
4. java.util.Scanner
1. InputStream to String using Google Guava IO
The Guava library has some very useful classes and methods to perform IO operations. These classes hide all the complexities that would otherwise be exposed.
1.1. Dependencies
Maven dependency for Google Guava.
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>26.0-jre</version>
</dependency>
1.2. ByteSource class
ByteSource represents a readable source of bytes, such as a file. It has utility methods that are typically implemented by opening a stream, doing something and finally closing the stream that was opened.
Its asCharSource(charset) method decodes the bytes read from a source as characters in the given Charset. It returns the characters as a String.
Example 1: Java program to convert InputStream to String
package com.howtodoinjava.demo;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import com.google.common.base.Charsets;
import com.google.common.io.ByteSource;

public class Main {
    public static void main(String[] args) throws Exception {
        InputStream inputStream = new FileInputStream(new File("C:/temp/test.txt"));

        ByteSource byteSource = new ByteSource() {
            @Override
            public InputStream openStream() throws IOException {
                return inputStream;
            }
        };

        String text = byteSource.asCharSource(Charsets.UTF_8).read();
        System.out.println(text);
    }
}
1.2. CharStreams
The
CharStreams class also provides utility methods for working with character streams. Using
InputStreamReader along with
CharStreams helps in converting an
InputStream to a
String.
Example 2: Converting an InputStream to a String with InputStreamReader
Java program to convert InputStream to String with CharStreams class in Google guava library.
import java.io.File; import java.io.FileInputStream; import java.io.InputStream; import java.io.InputStreamReader; import java.io.Reader; import com.google.common.io.CharStreams; public class Main { public static void main(String[] args) throws Exception { InputStream inputStream = new FileInputStream(new File("C:/temp/test.txt")); String text = null; try (final Reader reader = new InputStreamReader(inputStream)) { text = CharStreams.toString(reader); } System.out.println(text); } }
2. InputStream to String with BufferedReader
Using BufferedReader is most easy and popular way to read a file into String. It helps to read the file as inputstream and process it line by line.
Example 3: How to convert an InputStream to a string in Java
package com.howtodoinjava.demo.io; import java.io.BufferedReader; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; public class ReadStreamIntoStringUsingReader { public static void main(String[] args) throws FileNotFoundException, IOException { InputStream in = new FileInputStream(new File("C:/temp/test.txt")); BufferedReader reader = new BufferedReader(new InputStreamReader(in)); StringBuilder out = new StringBuilder(); String line; while ((line = reader.readLine()) != null) { out.append(line); } System.out.println(out.toString()); //Prints the string content read from input stream reader.close(); } }
3. IOUtils – Apache Commons IO
Apache commons has a very useful class IOUtils to read file content into String. It makes code a lot cleaner and easy to read. It provide a better performance too.
Use either of the two methods-
- IOUtils.copy()
- IOUtils.toString()
Example 4: Reading FileInputStream to String
package com.howtodoinjava.demo.io; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.IOException; import java.io.StringWriter; import org.apache.commons.io.IOUtils; public class ReadStreamIntoStringUsingIOUtils { public static void main(String[] args) throws FileNotFoundException, IOException { //Method 1 IOUtils.copy() StringWriter writer = new StringWriter(); IOUtils.copy(new FileInputStream(new File("C:/temp/test.txt")), writer, "UTF-8"); String theString = writer.toString(); System.out.println(theString); //Method 2 IOUtils.toString() String theString2 = IOUtils.toString(new FileInputStream(new File("C:/temp/test.txt")), "UTF-8"); System.out.println(theString2); } }
4. Java InputStream to String using Scanner
Using Scanner class is not so popular, but it works. The reason it works is because
Scanner iterates over tokens in the stream, and in this process, we can separate tokens using the “beginning of the input boundary” thus giving us only one token for the entire contents of the stream.
Example 5: Java convert FileInputStream to String
package com.howtodoinjava.demo.io; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.IOException; public class ReadStreamIntoStringUsingScanner { @SuppressWarnings("resource") public static void main(String[] args) throws FileNotFoundException, IOException { FileInputStream fin = new FileInputStream(new File("C:/temp/test.txt")); java.util.Scanner scanner = new java.util.Scanner(fin,"UTF-8").useDelimiter("\A"); String theString = scanner.hasNext() ? scanner.next() : ""; System.out.println(theString); scanner.close(); } }
That’s all. The purpose of this post is to provide quick links for the very specific purpose i.e. to read inputstream into string.
Happy Learning !!
Feedback, Discussion and Comments
praveen
Hi I have the data in the notepad like below:
[VERSION_NUMBER]
version_number = 1.0.0
[MULTIBACK]
BACK= 44C0E0B36BD26.JPG
BACK2= 0E0B36BD2C.JPG
[VIOLATOR_HEADER_1]
num = 0
Total = 1
Possible = 0
So i have get all the data and store in the string but I need to select any one field and edit that field how can do that please let me know
vicky
Hey , I am trying to read the data from InputStream and want to make it as a string so how can i do that.
I have read somewhere we can use JsonReader but in tomcat server i can’t able to process Json Request because its showing me Caused by: java.lang.ClassNotFoundException: org.glassfish.json.JsonProviderImpl
Ariel Malka
There is a bug in your implementation of 1). You need to insert a new-line character for each line read, otherwise the resulting string will be incorrect. HTH
chandana
Sir i have a string
String s= “B55627&827,1101111”;
I need to read this string as follows :-
int a = Integer.parseInt(s.substring(0,4));
int b = Integer.parseInt(s.substring(4,8));
int c = Integer.parseInt(s.substring(8,11));
int d = Integer.parseInt(s.substring(11, 13))
int e = Integer.parseInt(s.substring(13, 15));
int f = Integer.parseInt(s.substring(15, 17));
But i am only reading the first line not able to read the next lines at all.
What should I do sir
Lokesh Gupta
Hi, because code is throwing
java.lang.NumberFormatExceptionin first line itself. “B556” is not a number.
chandana
Thank you sir
chandana
Sir can you give me the code as how to extract TCP packet sent from the server.
Lokesh Gupta
I have no prior experience of working on socket programming, till date. So You will need to research on your own or ask your question over stackoverflow. I did a quick google search and comes up with this SO thread.
Vibz
I have main and class code but I am getting confused where to put the code for reading a stream of data. In class I have the code that has data separately. I need to put all those data in a stream and read that as a string. I have created in public class type. can you guide me as how should I proceed further.
Lokesh Gupta
Can you please post the code you have in hand, and then ask exactly where you are facing problem.
Vibz
I have received TCP packet through codewarrior software in C language. I need to capture that packet and segregate into certain bytes as header is of 4 bytes and data is of 3 bytes each and unpacket it to get the original information / data. Please provide the code to do so
Vibz
I need a code to read a stream of data and run as string.
Lokesh Gupta
You may get help from this post:
mauroprogram
please you can explain me the following line of code?
java.util.Scanner scanner = new java.util.Scanner(fin,”UTF-8″).useDelimiter(“A”);
String theString = scanner.hasNext() ? scanner.next() : “”;
What it is the delimiter”A”
and what it is a caracter of escape?
Tank you very much
mauro
pravnviji
Is it good to test the performance of scanner ? . If yes, then compare to most common way method. This scanner takes much time to read the file. If i am wrong please correct. Your resources are so helpful
long startTime = Calendar.getInstance().getTimeInMillis();
FileInputStream fin = new FileInputStream(new File(“/Users/administrator/Desktop/code.txt”));
java.util.Scanner scanner = new java.util.Scanner(fin,”UTF-8″).useDelimiter(“A”);
String theString = scanner.hasNext() ? scanner.next() : “”;
scanner.close();
System.out.println(“Scanner performance ” + (Calendar.getInstance().getTimeInMillis() – startTime));
RamNaresh
Can you post java io package interface/classes hierarchy diagrammatical.
Lokesh Gupta
I will try to find time | https://howtodoinjava.com/java/io/inputstream-to-string/ | CC-MAIN-2020-45 | refinedweb | 1,468 | 52.97 |
Thanks for the input. I ended up putting the info I needed into the url <dtml-var "someurl?a=blah"> I put it in a method that is usuble from a couple of forms and it seems to work. Thanks again. -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Kapil Thangavelu Sent: Tuesday, August 15, 2000 3:44 PM To: Stuart Foster Cc: Zope List Subject: Re: [Zope] How to use RESPONSE.redirect ? Stuart Foster Wrote: > I want to use redirect to call another form passing the current form, how > can I do that. > > <dtml-call RESPONSE.redirect('otherform'+?????)> > Hi Stuart, i ran into the same problem a little while ago. i was trying to pass the user around to the proper display&process page(with inputs inplace) after a logic page that determined where they should go based on their inputs. IMO, The crux of the problem is that Zope as a web development platform should include the urlparse lib from the python core more over this problem and others like it should be remedied i believe by a standard method of extending the modules in the _ namespace with thread safe modules that a developer deems nesc. OK enough soap box... i ended up reimplementing the nesc. functionality in a python method and created another method to implement complete form insertion in much the same the style that of some ACS(arsdigita) utiltiy methods do. here they are, usage examples are included in the code. of course this solution requires evan simpson's python methods product be installed on your zope. Cheers Kapil <method 1 name: url_encode_form_vars args: namespace <code> # depends on url_encode_vars try: vars=namespace['REQUEST'].form method = namespace.getitem('url_encode_vars', 0) return method(vars) except: pass #### #example call to above #<dtml-call "RESPONSE.redirect(URL1+'?'+url_encode_form_vars(_))"> </code> </method 1> <method 2> name: url_encode_vars args: <code> ''' Code straight from urllib minor changes to get around assignment to sub_scripts, access to string module, and namepace issues expects a dictionary of key value pairs to be encoded example call <dtml-call "RESPONSE.redirect(URL1+'?'+url_encode_vars({vars={'squishy':1, 'bad_input':'&user=root'}) )"> ''' global always_safe, quote, quote_plus always_safe = _.string.letters + _.string.digits + '_,.-' def quote(s, safe = '/'): global always_safe safe = always_safe + safe res = [] for c in s: if c not in safe: res.append('%%%02x'%ord(c)) else: res.append(c) return _.string.joinfields(res, '') def quote_plus(s, safe='/'): global quote if ' ' in s: res = [] # replaec ' ' with '+' l = _.string.split(s, ' ') for i in l: res.append(quote(i, safe)) return _.string.join(res, '+') else: return quote(s, safe) def urlencode(dict): global quote_plus l = [] for k, v in dict.items(): k = quote_plus(str(k)) v = quote_plus(str(v)) l.append(k + '=' + v) return _.string.join(l, '&') return urlencode(vars) </code> </method 2> _______________________________________________ Zope maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - )
- [Zope] How to use RESPONSE.redirect ? Stuart Foster
- Re: [Zope] How to use RESPONSE.redirect ? Jonothan Farr
- Re: [Zope] How to use RESPONSE.redirect ? Peter Bengtsson
- Re: [Zope] How to use RESPONSE.redirect ? Curtis Maloney
- RE: [Zope] How to use RESPONSE.redirect ? Stuart Foster
- Re: [Zope] How to use RESPONSE.redirect ? Kapil Thangavelu
- Re: [Zope] How to use RESPONSE.redirect ? R. David Murray
- Re: [Zope] How to use RESPONSE.redirect ? Kapil Thangavelu
- Stuart Foster | https://www.mail-archive.com/zope@zope.org/msg06255.html | CC-MAIN-2018-05 | refinedweb | 559 | 66.74 |
Logging Best Practices
Ray Saltrelli
・7 min read
When running an application in a local development environment, the de facto tactic to see what it’s doing is, of course, to attach a debugger. You can step through the code line by line and inspect every variable along the way. It’s a no-brainer. But what about higher environments like QA, Staging or Production where you cannot attach a debugger? How do we gain insight into what an application is doing in these environments? One word: Log!
Log Levels
Logs aren’t just for capturing errors. A robust logging strategy makes thorough use of all of the different log levels your logging framework has to offer. As a .NET developer, I've used several different logging frameworks over the course of my career, each of which offers the same six log levels, but the naming differs a bit between them. For consistency's sake, we’re going to use the names of the Microsoft log levels in this article.
Best Practices
It’s very common for the only logging in an application to be
Error logs from within try/catch blocks. While this is a good thing to do, if that’s all you’re doing, then you’re going to have very limited visibility into the execution of your application once it leaves your local development environment. Consider making better use of the other log levels according to the best practices described below.
Set the Current Log Level Via External Configuration
Setting the current log level should always be an operational task; never hard coded. External configurations allow operators to toggle verbose logging on demand via environment variables, configuration files, etc.
Default to the Information Log Level
When choosing a global default log level (e.g. in
web.config or
appsettings.json) for your application,
Information is usually a good choice. This means that
Critical,
Error,
Warning and
Information log entries will be written while
Debug and
Trace log entries will be suppressed. You might then want to provide a different default log level per environment (e.g. in
web.{env}.config or
appsettings.{env}.json); perhaps
Trace in Development and
Debug in QA.
Log Catastrophic Failures Using the Critical Log Level
As developers, we're are usually pretty good about logging run-of-the-mill errors: catch exception, log an error message, carry on. But catastrophic failures are a different animal.
A situation might arise where an application encounters an unrecoverable error during its start-up routine and throws an exception. You can catch the exception, log it using the
Critical log level instead of
Error and exit cleanly. In this flow, the log entry should make it out of the application since the application decided to end its own life as oppose to dying in a sudden accident.
Other catastrophic failures aren’t so easily handled. Maybe an exception is thrown during startup that is outside of your reach or Kubernetes decides to kill a container from the outside due to it hitting a memory limit. In the former case, any log entries that might be written are like people trying to escape a burning building; some might make it out before the building collapses on them, others might not. In the latter case, the application itself is unaware of the failure. One millisecond it is healthy and executing and the next millisecond its entire execution environment has been nuked. There isn’t the opportunity to even attempt to write a log entry. Be aware of these potential cases and work with your operations team to diagnose them when they occur.
Use the Warning and Information Log Levels Judiciously
Critical and
Error are pretty easy to use appropriately (i.e. when things go off the rails) but
Warning and
Information are not so clear.
Warning should be reserved for logs that are not errors—they don’t impede execution—but are not entirely normal behavior. Examples might be a cache miss on an expensive object that really should have been in the cache, a piece of code that completed but took longer than expected, a failed login attempt, or an access control violation. The key though is that the warning should be actionable. You shouldn’t clutter the logs with warnings that you don’t intend to do anything about. Only use them if you plan to do something if/when that warning appears consistently in the logs.
Information should be used when the application is executing normally but you want to communicate something to the operator. The most common of these might be application life-cycle events, like
Application started or
Application stopped. In the absence of a monitoring tool (like New Relic, AppDynamics or DataDog) it might be appropriate to log application events at the
Information log level, but if you have these tools, you should consider writing application events there rather than to your logs.
Don’t use either of these log levels so often that your application is writing copious logs when things are going smoothly. A healthy application’s logs should be mostly silent; perhaps only infrequent
Information log entries to confirm that it is still there.
Step Through Code Using the Debug Log Level
When your application is misbehaving is when you’ll want the ability to get enhanced visibility into what it’s doing. Your first inclination might be to try to reproduce the bug in a local development environment with a debugger attached, but this isn’t always possible. You might not know enough about the bug to reproduce it locally. It might be unique to the configuration or data of a particular environment. “It works on my machine.” Right? So what do we do in these cases? Well, if you’ve made good use of the
Debug log level when you wrote your code, a simple configuration change can get you more detailed logging from the environment where the issue is occurring.
But what does “good use of the
Debug log level” look like? To answer that, lets think about how we use a debugger. When you’re debugging an application, the first thing you might do is put a bunch of strategically placed breakpoints in the code to see which ones are hit, in which order, how many times, etc. You’d normally do this ad hoc after a bug has been discovered, but what if you did it proactively and methodically during development? You could log a
Debug entry at each place you’d potentially put a breakpoint with your debugger. Later, when your application misbehaves in a higher environment, you can ask operations to switch the log level to
Debug, see the log entries, and follow along in the source code to see what your application is doing out in the wild. Think of it like leaving a map for your future self.
These log entries should just be mile markers so you can identify which lines of code your application is executing. Don’t put any data, like the values of variables and parameters, in these logs. That should be reserved for the
Trace log level.
Inspect Variables Using the Trace Log Level
Trace is the most detailed log level and should only be turned on as a last resort. When stepping through the code with the
Debug log level isn’t enough, use
Trace to write the values of variables, parameters and settings to the logs for inspection. This mimics the behavior of inspecting the contents of variables with the debugger attached. It should only be done as a last resort for two reasons: 1) if you’re being thorough, this will drastically increase the amount of log data your application will produce, which has storage, cost and performance implications and 2) there could be sensitive data in the log entries written using this log level (more about this in the next section).
Never Log Personal Identifying Information or Secrets
Never—and I really mean never—write personal identifying information, credentials or other secrets to any log level (including
Trace). Seriously, don’t do it!
Personal Identifying Information
Regarding PII, writing this information to logs makes your life that much harder when dealing with GDPR, CCPA and whatever other privacy laws pop up in the future. You don’t want to have to worry about purging PII from your logs because your log retention is greater than the 30-day deadline for complying with GDPR requests.
PII you should avoid logging includes, but is not limited to:
- Username
- Gender
- Birthday
- Mailing/Billing Addresses
- Phone Numbers
- Social Security Number
- Credit Card Numbers
Secrets
Regarding secrets, writing this information to logs is a big security faux pas. Logs are often openly available to a broad audience in a technology organization so anything written there is visible to lots of people. This means that if anything shows up in the logs that isn’t meant to be broadcast to everyone, the admin of that system has to change it in order to maintain security.
Secrets you should avoid logging include, but are not limited to:
- Usernames
- Passwords
- API Keys
- Encryption Keys
- Connection Strings
- Magic URLs
Example
public class Goku { private readonly ILogger _logger; public Goku(ILogger<Goku> logger) { _logger = logger; } public int PowerUp(int desiredPower) { _logger.LogDebug($"BEGIN {MethodInfo.GetCurrentMethod()}"); _logger.LogTrace("desiredPower = {desiredPower}", desiredPower); try { var power = 0; var warned = false; for (; power < desiredPower; power += 100) { _logger.LogDebug("Charging ..."); _logger.LogTrace("power = {power}", power); _logger.LogTrace("warned = {warned}", warned); if (power > 9000 && !warned) { _logger.LogWarning("It's over 9000!"); warned = true; } } _logger.LogDebug("Power up complete."); _logger.LogTrace("power = {power}", power); } catch(Exception ex) { _logger.LogError(ex, "Failed to complete power up."); } _logger.LogDebug($"END {MethodInfo.GetCurrentMethod()}"); return power; } }
Great article! As knowing which log level to use can be tricky (and not an exact science 😅) at some point my team decided to limit to only 2 levels:
debugand
errorwith
errorlogs being sent via Slack/Email to make sure they don't go silent.
Most people think of logging as text based, but structured logging to a database is incredibly useful. It allows you to track down specific instances of something failing by searching with parameters you suspect.
You can also save a lot of space by simply storing the hash of log message template, instead of the merged string. | https://dev.to/raysaltrelli/logging-best-practices-obo | CC-MAIN-2019-47 | refinedweb | 1,726 | 52.49 |
We are now planning features for NetBeans 6.9, which will be probably the next version of our IDE. There are tons of things, which can be done in editors and other areas that are needed for the web development. From the PHP developer point of view we plan mainly improvements in the PHP editor and create a support for Zend Framework and also Cake PHP framework.
My goal now is to evaluate possible features that can help in the web developer workflow. I would like to ask you, what could help you to make your web developer life easier. Forget on editor features, debugger or a framework support. These are different areas.
I'm mainly interested in features like an integration NetBeans with Firefox - Firebug. For example I use Firebug for "tuning" css and html, because it allows to change the code directly in Firefox and see the result instantly. Unfortunately there is no way how to save these changes back to the sources. If there will be a plugin, which can communicate with NetBeans, then there is bigger chance to save these changes. Sure it can not work in 100% cases, but still can be useful.
Also I have heard a requirement about refreshing a page in Firefox on save in NetBeans. I'm not sure how much people want this, because in this case there should be a mapping between the URL and the file. Or sometimes our users complain that always new tab is open, when they run project or a file. With such integration it would be possible that NetBeans will be able to work only with one tab in Firefox. Or integrate few features from YSlow directly in NetBeans.
Another thing is compressing CSS and JavaScript files for production site. Do you compress CSS and JavaScript files? If yes, how NetBeans could help you with this task?
I don't want to express here everything what is in my head. It would be better to here it from you. What will be useful for you? Please, mentioned the things, which can significantly simplify the workflow and save your time.
Thanks for your time.
Hello,
I m interested by the compressed of js and css but i use svn for publication so i don't think it s util for me.
But good idea , a preview for rendering web page it a beautiful idea too ;)
;)
"Also I have heard a requirement about refreshing a page in Firefox on save in NetBeans."
Agree, this would be usefull, having Firefox running on second monitor.
"only with one tab in Firefox. Or integrate few features from YSlow directly in NetBeans."
this one is interesting, but what exactly can be done?
"Another thing is compressing CSS and JavaScript files for production site. Do you compress CSS and JavaScript files? If yes, how NetBeans could help you with this task?"
Yes, this would be nice feature. Google offers their technology for compressing.
I'd be interested in some WSDL features for code generation; both ways - i.e. creating a WSDL file form a class, and creating a class that consumes a WSDL file; although that might be a bit too "IDE" feature and not the out of the box comments you listed.
Very much looking forward to Zend Framework support.
Support for Selenium would be great too - built into netbeans (like selenium is built into firefox plugin) and be able to launch and control selenium (to different browsers?!) all from netbeans would be brilliant.
I'd love to see Zend Framework support, I don't know why symfony took its place.
Two simple improvements:
- in HTML editor classes and ids completion (based on project css files)
- another thing in HTML and CSS - images browser, for example in img tag.
JS-compression: what's the point in having another obfuscator/compressor as there are so many around already?
.
"Also I have heard a requirement about refreshing a page in Firefox on save in NetBeans. I'm not sure how much people want this, because in this case there should be a mapping between the URL and the file."
Exactly! Usually there is no such mapping. It's a lot of work for little gain. I suggest to drop the idea.
@James Selenium support is already there. Just check the plug-in repository. However: perhaps Selenium support should be reviewed! The default class name is in Portuguese (?), it doesn't follow the styling rules plus it has a missing unreported dependency to Log4J and presents users with an error message after installation.
You have to read the Wiki to find out you manually need to install Pear, PHPUnit and Selenium first - it won't do that for you or warn you if you forgot to do so.
Also: once you set up the Selenium test directory you cannot change the directory anymore. If you selected the wrong directory, you need to create a new project.
You may want to talk with Kai Seidler [1]. He and his XAMPP friends have a lot of experience with AMP.
[1]
Improved XDebug integration would be nice... doesn't really work well at the moment.
Regarding the issue of compression. Already it is easy enough to find compressed versions of Javacript libraries. However, something built-in to Netbeans that could compress CSS would be useful to me (example:). One thing I often find is that after working on a very large project with a number of developers/designers our various CSS files can get quite large so finding CSS selectors that are not being used would be helpful.
Additionally, something that could strip out various types of comments would be good (I have my own script that does this but having it built-in would be nice.)
Also I always want something to let me test/generate Regular Expressions.
Improve the Nebeans CSS and HTML palletes (they are limited... really Dreamweaver handles this quite well.)
So...
1. compress CSS
2. find unused CSS selectors (finding CSS selectors based on selected dom elements would also be nice)
3. strip comments
4. regular expression tester/generator
5. Improved CSS and HTML palletes (see Dreamweaver)
I would like to see support for git. Almost all of my projects have moved to git.
I'd also like to have the code completion come up automatically, not triggered by a key combo.
Various stability and memory improvements as well for mac - it still doesn't feel as solid as I'd like.
@Mike "I'd also like to have the code completion come up automatically, not triggered by a key combo."
I strongly support this idea! Thumbs up for that..
"create a support for Zend Framework and also Cake PHP framework."
Since I don't use those, I currently don't care. I'm running on Yana Framework since several years now.
Currently I have to write my XML and Smarty files in PSPad all the time, since NetBeans has no working highlighting or code-completion for Smarty and no code-templates for my XML files.
However: support for Yana could be easily integrated.
All you need is just a hand full of auto-complete templates for XML files and Smarty.
I could write those templates and send them to you, if you like.
This should quite instantly bring the Yana support, which is available in PSPad and ConTEXT, to NetBeans. It's easy to do and would improve my coding experience a lot..)
Currently I run this script in a cron-job during our nightly builds. So I always got the latest PHP function list including all the latest PHP extensions.
Full GIT support is needed and SMARTY support, at least syntax highlighting (with configurable delimiters of course). Both are more important as all mentioned above
Instead of bundling a js/css "compressor" I think it'll be much better to integrate nicely with Ant and Phing, besides PHPUnit.
Another nice addition would be the support for Phar archives, which would certainly help to make it more popular. PHP really needs to move to phar files, specially now with PHP 5.3 and the bigger framework moving to real namespaces.
The debug start button should have an option to disable the opening of the browser. Often times I'm already at the correct URL in the browser and just need to tell Netbeans to listen for debugging connections.
One more thing quite important is to improve xdebug integration. I feel really envious every time I see a Java developer debug in Eclipse (haven't seen them with Netbeans). Most times in web development we might use var_dump() and similar to "debug" but every once in a while having a really powerful debugger makes the difference between fixing a bug in 5 minutes or wasting the whole day with it.
HI POST IN SPANIS / ESPAÑOL / CASTELLANO
TRANSLATED IN SPANISH
LINK:
INGLES
New ideas for the IDE Netbeans PHP
1) when we make the Debug from FireFox, you must enable it by POST, next page, or all pages.
Since currently only allows a debug of all the pages you are sailing.
The idea is to make it function = that the extension of ZEND
2) Having an option that could be optimized website.
Which is responsible for optimizing all pages that are on site to remove comments, javascript, css, html, php, compact or compress the code in fewer lines to lower the weight of the.
3) A complete environment which is already integrated with Apache, PHP, MySQL, Xdebug, all in one package more complete.
4) Power Wizard-create a profile with a template project, which has the directory structure of how each programmer to have on future projects. and that this project template can be backup for future installations.
5) Integrate the browser to the Zend Studio IDE and is
Greetings and this is my ides for the amount for the future Netbeans PHP.
ESPAÑOL / CASTELLANO
Nuevas ideas para la IDE
1) cuando hacemos el Debug desde FireFox, tiene que permitir hacerlo por POST, Pagina siguiente, o todas la pagina.
Ya que en la actualidad solo permite hacer un debug de todas la paginas que vamos navegando.
La idea es hacerla que funciones = que la extensión de ZEND
2) Que tenga una opción que podria ser optimizar website.
El cual se encarga de optimizar todas la pagina que están en el sitio quitando comentario, javascript, css, html, php, compactar o comprimir el código en menos lineas para bajar el peso de los archivos.
3) Un entorno completo donde ya este integrado con Apache, PHP, MySQL, XDebug, todo en un solo paquete mas completo.
4) Poder crear por asistente un perfil con un template de proyecto, donde tiene las estructura de los directorio como trabaja cada programador, para tener en los futuros proyecto. y que esto template de proyectos se puede backup para futuras instalaciones.
5) Integra el navegado a la IDE como tiene ZEND Studio
Saludos y esa son mi ides por el monto para el futuro de Netbeans PHP.
Support for HTML5
Smarty Support for 3
WYSIWYG preview features.
Integra FrameWork DooPHP, CodeIgniter, TinyMVC, Yii.
ORM Generation Wizard class.
IN ESPAÑOL / CASTELLANO:
Soporte para HTML5
Soporte para Smarty 3
Funciones de vista previa WYSIWYG.
Integra FrameWork DooPHP, Codeigniter, TinyMVC, Yii
Asistente para generar class ORM.
Generating class PHP, CRUD Application from a Database.
2. As you said - take firebug/firephp and bind them to the ide so when u run sites from netbean you can do more things with these tools. For example: save resutls and measure times between executions.
Thanks for asking us (=developers :)
Good luck.
Ido
My top 5 wishes for the next version
1) Improve performance
2) Improve scanning performance
3) Solve the scanning hell once and for ever
4) Make classpath scanning faster
5) The scanning should be a lot faster
It can be nice also Yii framework support
A fix for the scanning issue would be nice. Hans, have you tried the Scan-on-Deman plugin? Without it NetBeans would be unusable for me. would also like to have an option to quickly change file-encoding, and a way to see if NetBeans is treating af file like UTF8, ANSI or other.
But I guess these issues all belong in the "general editor features" category and not really what Petr was asking us about :-)
Hmm, regarding the Javascript compression, what I'd really love to see is full integration for the Dojo Toolkit.
With full integration I mean primarily three things:
\* Have all Dijit and Dojox widgets available in the palette (I can emulate this now via drag-and-drop from the editor, but I won't have a nice settings dialog).
\* Have code completion for custom Dojo tag attributes
\* Integrate custom build generation: Either manually choose the packages you want in the build, or have Netbeans analyze which packages are used in the project and then recommend a package list to include. When done, compile a custom build by clicking a button. Oh, the sweetness ;-)
Doctrine support please :).
1) Improve performance
2) Improve scanning performance
3) Solve the scanning hell once and for ever
4) Make classpath scanning faster
5) The scanning should be a lot faster
NetBeans is a memory hog now! Please fix that!
GIT SUPPORT!
Js/css compression: you have some symfony plugins to do that in the "framework side".
It would be cool to have a new projects tab more symfony oriented in place of files. Something to navigate more quickly between sform, model, module, plugins something like the navigator class but on all the project.
Another idea: a log viewer with different format like mysql apache symfony …
And the possibility to have syntax coloration on php in json or json in php and an optionnal possibility to choose yourself your editor like in eclipse (hope it's not a dirty word ;) )
Include favorites in project, Doctrine query auto completion like for SQL, keymap profile for DVORAK (bépo) users …
Make an C++ Qt4.5 port … ok ok I'm joking ;)
Sorry for my english everything is the fault of google translation ;) …
One really basic IDE feature is just to be able to right click on a file (or its tab) and select "Open the Containing Folder" of that file!
Fifth (re-re-re-second) the call for better performance. Especially memory usage.? You folks using ZF, I know what level of commitment you've \*had\* to have made to really use it - but can you really see \*any\* third-party supplier of a FREE TOOL investing the resources to support ZF \*properly\*?
Finally, look at what the other IDEs and editors out there do that get positive feedback on their forums. I keep trying NetBeans and a few other editors every so often, but I keep going back to ActiveState's Komodo. Give me enough reason not to, and then figure out how to turn me from a reasonably-satisfied \*user\* into a satisfied repeat \*customer\*. Your shareholders will \*love\* that.
Bling like CSS compression should be way, \*WAY\* down your list.!!!
If you want to improve performance just return to Visual Web Pack, with or without woodstock components.
To me as a backend developer the most important features to be implemented are:
git
doctrine
zend framework
phing
I'd also like to have:
file-encoding that can work with mixed encodings in a project (i have to use external editors all the time, thank god for notepad++)
better profiling and debugging support
better/smoother performance
- Seeing as how everyone's suggesting their favorite frameworks, how about support for FLOW3 and Fluid ( )
- Vastly improved performance
-.
- Using dark theme with white foreground text renders the "Find instances" pane unreadable due to white-on-beige writing. Again trivial but annoying.
- ...)
- Conditional breakpoints
And the big one....
1. Create a new Class against an existing Interface and create all necesarry methods, like the Getter and Setter Creating Option
2. Customising the creating of the Getter and Setter Methods
Zen Coding for HTML and CSS would be awesome. I have found and tested the plugin that is available for Netbeans on OS X, but it was lacking in functionality compared to what Zen Coding CAN do for you.
Having Zen Coding distributed with Netbeans, and not as a plugin you have to wrestle into Netbeans yourself, would be preferred.
Hi Petr,
1. Last days I found "Zen Coding: A Speedy Way To Write HTML/CSS Code" ( - look at screencast). This is excellent idea how to improve HTML coding. Full support for netbeans would be great idea..
@gawan I believe NetBeans already has most of the things needed to make just about any framework work.
- global include paths
- code-templates for auto-completion
Of course as a developer I would love to easily define new file types, write and add my own code-highlighter and functions for code-completion (see editors like PSPad, ConTEXT and others). I haven't done it, because the current way of doing it is crap. (same thing in Eclipse)
While other code-editors have a generic highlighter that takes just about any syntax file and highlights any user-defined content to whatever the developer wants, NetBeans and Eclipse both don't really have a convenient way to do the same thing.
However: while I do believe this is highly important, it might not be in the scope of development for the upcoming PHP features of NetBeans, since it's not a PHP feature at all.
Instead I suggest to merge the code-templates into the auto-completion dialog (instead of always having to type the whole shortcut and push <tab>, which is not convenient).
This way anybody could add user-defined functions in just about any syntax to the auto-completion dialog and mimic the look and feel of having "real" functions.
Also the Smarty highlighter might really help people out, since it's the most popular of all available template engines. Even most frameworks rely on Smarty, so this would also please developers of frameworks and writers of Smarty extensions alike.
A Sync-Button in the IDE to sync Projects Web-Directory with the content of the Server would be simple and great!
When i add a ten files or more to the Source, Netbeans won't copy them to the target. Sure i can copy them by hand or create an ant-file with a copy Target on that, but a simple Sync-Button would be great.
Hi,
Thanks for your great work and for asking us what we need.
As a CakePHP developper (as all developpers in my company), here is what we need :
- autocompletion based on Cakephp convetions ($uses, $components, etc ....)
- SimpleTest unit testing framework support
- Support of CakePHP console mode
Sébastien
I'm new as a cakephp developer, i'm using Netbeans IDE for java development and for working with php too, There are a lot of thigs that you can improve in netbeans for supporting cakephp framework, somethig useful like JSF in java, some advanced features to build interfaces with html and css kind of dreamweaver, but useful like netbeans style. As a beginner in cakephp dev, I think you can improve support for conventions, and console tasks even in a visual style like java web development in netbeans.:
And what about Nette framework?
I'd love to see the IDE load faster, it takes very long to load initially. I'm currently constantly leaving the IDE open for loading other files.
Also.. what happened to code completion of functions in >= Netbeans 6.7.x ?
What about allowing users to set Netbeans as the default editor for php, js, html, htm, ... in Windows?
Copying a filename in the context menu of a file?
Support for components like in VS + ASP.NET? The components for Netbeans + Java developing are also pretty advanced. Why isn't there anything like that within the Netbeans IDE for PHP? There is basic support for HTML tables.. but this is too basic! :)
When there is an instance/object of a class.. and the class itself is defined at the bottom of a file.. The code completion for methods within the instance/object doesn't work. I get 'No suggestions..'.
e.g.
$myObj = new phpObj();
$myObj->.. <-- there are no suggestions for methods here..
class phpObj { method1, 2, 3 .. }
That's it for now :p
Just a clarification on my early suggestion that you responded to:
"One really basic IDE feature is just to be able to right click on a file (or its tab) and select 'Open the Containing Folder' of that file!"
You provided a great valid answer but I wasn't specific enough. I meant being able to open the folder in Explorer (I'm a Windows 7 user). This was really useful in other IDE's I've used so that you don't have to always keep explorer open to do something with the actual file system, or hop to the folder above the code files etc.
In this moment, i think netbeans is the best ide for php and with these changes will be even better! I stay happy for this.
Zend Framework and Cake are the most usable frameworks for php.
parabéns pelo excelente trabalho!
Great to see your interest in feedback. I've been amazed how many people in the PHP community are starting to use Netbeans over Eclipse.
I don't find much use in css and js compression, as I think this is ideally handled by the server and not the ide (on projects of any size at least).
I would love to see Zend Framework support. Code completion does a good job already but any tighter level of integration would be a plus.
I would also back any performance improvements as others have mentioned.
Clearly some people might prefer having built in functions pop up automatically, it might be a nice option. Please make it an option though, since I'm sure it would hurt performance.
One thing that I've often wished for was tighter integration with the native file system. Like it would be great if you could right click on a file and have the option of "Open In Finder" (I use a mac).
Bringing in more refactoring and code generation options would be nice too. I use Java too and really enjoy the code generation for getters and setters, etc.
Finally it would be REALLY nice to be able to control-click on a method and have it pop up the php doc information (I think javascript does this).
Hopefully these aren't in 6.8, I haven't used it much just yet.
I used Zend Studio for years and I recently tried Netbeans, I shall not return to Zend!
The only things I need actually and that was great in Zend are :
- ftp browser without creating a project and download a local copy of files, just to edit quickly some files on differents ftp ... I tried the only plugin for Netbeans I found, but it's not really functionnal
- a "duplicate file" action on a right-click to a file in the projet explorer
- a "root files filter" for download action on a root of a project. If I need to download only a .htaccess file on the root for example, I have to wait that the download script finish to filter only files I need, and if you have a lot of folders and subfolders this could take a while ...
But really, the most important thing for me is the FTP explorer :)
Thx a lot
Tatane
I would like to see a full implementation of ZenCoding in Netbeans.
I'm still trying to convince my boss to give the idea of an IDE a try so my needs are much simpler.
We use OS X.
I want the ability to drag a file from Netbeans projects/files window out into the rest of OS X and have it behave like a normal file. (Eclipse can do this)
The most common usage for this is to drag a file from the editor over to my ftp window to upload the file to the live server. Or to drag it to a network drive to back it up or to drag it to an e-mail window to attach it.
---
I want the ability to drag a text file from the desktop to the netbeans icon and have it open without needing to set up a project.
We have almost 100 old sites going back 10 years that don't need full netbeans projects set up for them. We just need to be able to make quick edits and take advantage of netbeans auto-complete for basic html/javascript/php.
Maybe the file could create an optional temporary project when you drop it on netbeans icon based on the folder the file is in?
-----
I want the find and replace text boxes in the Replace window to allow multiple lines. I also want the abiliit to search for line returns or replace things with line returns. I want regular expressions to work across multiple-lines.
What if I want to find all occurences of 4 blank lines back to back to clean up a file.
What if I want to change all (li)List item text(/li) into
(li)
list item text
(/li)
Netbeans lets me replace LI with LI\\r (carriage return) but then it won's let me find LI\\r.
Every editor ever made should have a look at the Find/Replace features of BBEdit (OS X).
Among them: A list of useful regular expressions built into the replace window.
Ability to save your own regex to the list.
A list of recently searched for/replaced strings.
Find & Replace fields are multiple line.
Thanks for your useful info, I think it's a good topic.
For CSS and JS compression I use the YUI-compressor.
I created a small script to run from within Krusader: it writes a minified copy in the same directory.
It would be nice to have a customizable right click menu in the project or file window. I could then call an external program or script, like open file in external editor or compress script.
Furthermore I would prefer a feature to save and make shortcuts to replace patterns, like described here:
Now it's only saved for the session, and gone after restart.
Populating the CSS code completion with colours used in the current stylesheet would be a god-send (Most frequently used colour at the top, descending), currently I don't use NetBeans for CSS coding as it's too awkward, the preview window barely ever works, no colour picker that I can easily access etc. Instead I use TopStyle from Bradbury Software, which is definitely worth looking at for CSS Editing inspiration (Their colour picking system basically makes the whole product worthwhile!)
For me, the most important feature missing from NetBeans is a ftp/sftp explorer. On a collaborative project, I just can't download the whole code every time I want to edit a file so that I know I'm editing the last version...
It would be great if the editor automatically writes the import or require statements when you use a class or a method which is not declared in the same file.
And if you have written an import or require statement you should have all the methods and classes available.
Netbeans should then suggest these methodes and class and constants by auto completion.
My favourite is this issue:
I hope to see it in 6.9 :)
By the way, thanks for the grat 6.8 release.
Hello! Thanks for asking about this... Netbeans is a great editor, and I'm happy to see you guys taking it so seriously. 6.8 Rocks my socks off with its tremendous speed gains.
My workflow recommendations, from the point of a "Front End Developer" (Designer, CSS & HTMLer, OOP JavaScripter, Wordpress/Drupal/etc themer, Dabbler in PHP/Python/Ruby/etc):
1. Live CSS Preview, just like CSSEdit for Mac (as you mentioned in your article, through a Firefox or perhaps faster Chrome plugin). This speeds up my workflow by a ludicrous amount when doing the CSS portion of my work, especially with dynamic sites (and what sites aren't dynamic these days).
2. CSS/JavaScript Concatenation + Compression. I oftentimes have a bunch of javascript files I work on that perform different pieces of functionality (javascript framework, autocomplete plugin, cufon, calendar functionality), and want to first concatenate and then compress them together into one, big happy minified file as per the Yahoo Speed Recommendations. Otherwise I'd have to do all this by hand or build and Ant task (which works too).
3. Image Optimization, like Yahoo Smush it! Smushit! is just a wrapper for imagemagick, pngcrush, and a couple other apps. It would be cool if you could perform image optimization on a file or directory within a project, rather than exporting your images, running smush it, downloading the zip file, and then uncompressing it again into your directory and doing a bunch of renaming.
4. CSS Sprite Creator. For this one and #2, I would look at "jsLex" from RockStar Apps which is an Eclipse plugin, and does an amazing job at sprite creation and concatenation + compression. For sprite creation, it will merge the selected images together into a composite sprite, and then give you information on the position of each image, going so far as to generate the background-position CSS and HTML (as opposed to measuring out the pixels in photoshop from a sprite you create yourself). If one of the images changes, you can regenerate the master sprite quickly and easily (not so fun to do by hand). This has been a huge time saver for me, and is one of the remaining reasons I open up Eclipse. If you could combine the sprite creator with a super good image optimization tool, I would be in heaven.
5. Being able to group CSS properties in the nav would be sweet as well (refer to CSSEdit's @group). Might not be as workflow related, though.
These are my wishes! Thanks for taking the time to ask and for making such a badass product.
1. CODE COMPLETION
I would like to see like FY mentioned a better code completion for working with objects.
- If i first create an object and after that i define a class -> I get 'No suggestions..'
- If i have some class files and i create an object in another file, it often get 'No suggestions..'
- If i want to access a method from an object in an object - for example $test->getMyChildObject->getAChildFunction - i get 'No suggestions..'
- If i put an object in a session variable - for example $_SESSION['test']->getMyChildObject would be nice if then the code completion activates
2. REFACTORING
Better refactoring options would also be nice!
sorry for my english :-)
CONTEXT MENU in Projects window could be improved.
Two notepad++ features, which I miss here, are:
1. when I right click on a file, context menu pops up. In this menu there should be one item called 'standard menu', which would be windows context menu.
I miss this feature because in my windows context menu, i've 'open command prompt here'. Which is really handy for scripts and related tasks.
2. As it has been earlier mentioned, option 'open containing folder' is missing in context menu.
Yii framework support would be awesome.
Thank you NetBeans team for your great work (NetBeans user since V4.1)
I recommend support for Zen Coding. This would add remarkable additional efficiency to what is already a great tool.
It's a shame that an IDE like Netbeans doesn't have full zen-coding support. Please, help us increase our productivity.
One feature I would really like to see is some kind of graphical code explorer for PHP similar to nwire. I use Netbeans to code my PHP more than anything else and once in awhile I find myself yearning for a code explorer/sniffer to examine larger code sets.
1. Pick up return carriages on copy/paste. Doing extra work with regular expression is annoying and unnecessary, and still doesn't work most of the time. Again, I do a LOT of find/replace in my work, and this makes a huge difference in my time.
2. Make the find/replace a textarea rather than an input so I can work with multiple lines.
3. Make find/replace actually look in the text of multiple documents and not just the names of documents.
If these are already in place and I'm simply ignorant of them, help a brutha out!
Cheers,
Chris | https://blogs.oracle.com/netbeansphp/planning-features-for-netbeans-next | CC-MAIN-2021-04 | refinedweb | 5,460 | 71.04 |
I have a text file test.txt, with the following contents:
Thing 1. string
Thing 2. string
Thing 3. string
Thing 4. string
Thing 5. string
file = open("test.txt","r+")
started = False
beginning = 0 #start of the digits
done = False
num = 0
#building the number from digits
while not done:
next = file.read(1)
if ord(next) in range(48, 58): #ascii values of 0-9
started = True
num *= 10
num += int(next)
elif started: #has reached the end of the number
done = True
else: #has not reached the beginning of the number
beginning += 1
num += 1
file.seek(beginning,0)
file.write(str(num))
file = open("test.txt","a+")
file = open("test.txt","w+")
#file is assumed to be in r+ mode
def write(string, file, index = -1):
if index != -1:
file.seek(index, 0)
remainder = file.read()
file.seek(index)
file.write(remainder + string)
file.read()
file.write()
Unfortunately, what you want to do is not possible. This is a limitation at a lower level than Python, in the operating system. Neither the Unix nor the Windows file access API offers any way to insert new bytes in the middle of a file without overwriting the bytes that were already there.
Reading the rest of the file and rewriting it is the usual workaround. Actually, the usual workaround is to rewrite the entire file under a new name and then use
rename to move it back to the old name. On Unix, this accomplishes an atomic file update - unless the computer crashes, concurrent readers will see either the new file or the old file, not some hybrid. (Windows, sadly, still does not allow you to
rename over a name that already exists, so if you use this strategy you have to delete the old file first, opening an unavoidable race window where the file might appear not to exist at all.)
Yes, this is O(N), and yes, if you use the write-new-file-and-rename strategy it temporarily consumes scratch disk space equal to the size of the file (old or new, whichever is larger). That's just how it is.
I haven't thought about it enough to give you even a sketch of the code, but it should be possible to use context managers to wrap up the write-new-file-and-rename approach tidily. | https://codedump.io/share/5xOlGvkiw7kZ/1/is-there-a-straightforward-way-to-write-to-a-file-open-in-r-mode-without-overwriting-existing-bytes | CC-MAIN-2017-47 | refinedweb | 392 | 73.68 |
The market is driven by two emotions: greed and fear.
Have you ever heard that quote? It is quite popular in financial circles, and there may just be some truth behind it. After all, when people with short-term investments think they are going to lose a lot of money, many of them sell as fast as they can. When they think they can make money, the same happens: they tend to buy as fast as they can. People are inclined to overreact to what they hear or read, especially if it's on the news. All of these observations support a new way of making market estimations: paying attention to what people (investors) hear about the markets.
Using market-related data from social media and news feeds is not a recent idea. It has been applied for some years and the improvement in market estimations can be substantial. In this post, we will explore some techniques that allow us to analyze text data in order to predict market movements. They are part of a very exciting field of Machine Learning: Natural Language Processing. In particular, we will focus on dense text representation, that is: encoding text into relatively small vectors that retain all the useful information. Then, we can use such vectors to feed algorithms and teach them to find patterns hidden in the relationship between financial text and market movements.
First, we will describe some fundamental and well-tested techniques for text representation, later we will apply them to news articles talking about the American car company Tesla. We have chosen Tesla because they build electric cars able to (kind of) drive themselves! But the choice was also driven (no pun intended) by the fact that Tesla has been on news headlines quite a lot, so we will have more data to play with.
Bag of words
This traditional NLP method takes advantage of word frequencies inside a document and across all documents in a corpus in order to find the most important words for each one. It does so by computing the Term Frequency – Inverse Document Frequency (TF-IDF) value for each word. There are several variations on the TF-IDF implementation, but the simplest formula to get the weight of a word i for a document j is as follows:
$$w_{i j} = tf_{i j} * log{\frac{N}{df_i}}$$
Where \(w_{i j}\) is the TF-IDF of word i in document j; \(tf_{i j}\) is the number of occurrences of word i in document j; N is the number of documents and \(df_i\) is the number of documents containing word i at least once. As we can see. If a word appears many times in a document, its importance increases, but if it also appears in many other documents, its importance decreases. This favors words that appear many times but only in a few documents, they are supposed to give more information. As an example, we will show 4 documents, each one consisting of a single sentence. We will also compute the frequency matrix, which contains the term frequencies of each word in each document. Finally, we will show the TF-IDF of each word in each document. Note that a word will have a TF-IDF value for each document it appears on.
Figure 1: A bunch of docs.
Figure 2a (left): Frequency matrix (left) and Figure 2b (right) TF-IDF matrix.
Document #4 shows how TF-IDF differs from a simple word count. In Figure 2a we see that the word “a” appears twice in the document, that is the highest frequency in the document, one may think the most repeated word is the most important one, but the TF-IDF for that word (figure 2b) is lower than that of the word “window” for example. This happens because the word “a” appears in every single document, while the word “window” only appears in document #4, therefore “window” is more relevant than “a”, which is, indeed, true.
However, the bag of words algorithm present some deficiencies. For instance, it doesn’t take into account word order or the context. There is often a lot of meaning embedded in the context and a bag of words couldn’t care less about it. As an example, consider “I have to eat food” vs “I have food to eat”. To a bag of words, both sentences are one and the same. There is also a technical problem regarding data sparsity. Bag of words regards each word as a single dimension, therefore, the dimensionality of the problem is as big as the vocabulary size. This number is usually quite massive. With that many dimensions, the curse of dimensionality becomes a serious issue. In particular, when we aggregate the TF-IDF values for a document, we form a vector that we can use to compare different documents. Those vectors lay in a high dimensional space and they will be quite far apart, we are dealing with a sparse vector space.
While bags of words are very popular, they are not the only NLP technique. There are a wide array of methods and some of them address the issues that bag of words have. In this article, we will talk about word and sentence embeddings, a kind of techniques that allow us to encode text information in small and dense vector spaces. These vectors can also retain information about the context and word order.
Word embeddings
In his seminal paper, Mikolv et al (Efficient Estimation of Word Representations in Vector Space, 2013) proposed a technique to map words to dense vector representations, reducing the sparsity issue that plagues most NLP tasks. Those words embeddings are also able to encode semantic and context information, so, with different grammar but similar meaning, they end up closer together in the vector space. It is even possible to perform sensible vector arithmetic. A typical example for word vector arithmetic goes as follows: subtract the vector representing “man” from the one representing “king” and add the vector representing “woman”, you will get the female version of a king, also known as “queen”.
Mikolov presented two different techniques to achieve effective word embeddings:
Skipgram model:
This technique tries to maximize the following equation:
$$\frac{1}{T} \sum_{t=1}^T\sum_{-c \leq j \leq c, j \neq 0} log(w_{t+j}|w_t)$$
That is: the average log probability of the context words given the central word. As a graphical representation:
Figure 3: Skipgram architecture (left) and examples of using context words around central words (right).
As we can see in the figure, starting from the word “sat”, the model tries to predict the most probable context words, which in this example will be “the cat on the mat”. Now, if our training set has many words containing the same context, such as “the cat laid on the mat”, the model will learn similarity between these central words as well as the relationship between each central word and the context words. Even more, the task of predicting context words takes order on account, so time dependencies will be encoded in the embeddings as well.
The model has a single hidden unit with linear neurons, which are the word embedding itself and contain enough information to predict the context words for the central word they belong to.
Continuous bag of words (CBOW)
Figure 4: CBOW architecture.
This technique is the inverse Skipgram. Indeed, this time the inputs are the context words and the target is the central word. This means the hidden layer will contain enough information to predict the word for which it forms an embedding, starting only from its context words.
In practice, embeddings from both methods perform similarly. They outperform bag of words methods for text representation tasks as it is shown in the paper cited.
Note that both approaches receive words with a 1-of-n encoding (or One-Hot encoding, each word is a vector of zeros as long as the vocabulary size, and it has a value of 1 only on the entry corresponding to that word) and the output uses 1-of-n encoding as well. Those are sparse representations, the useful part is the hidden layer which is a dense vector containing the word embedding.
Doc2vec
The last technique we will review is an extension of word embeddings. In fact, the main author involved is, once again, the great Mikolov (Distributed Representations of Sentences and Documents, 2014). Doc2vec takes word embeddings one step further and creates document embeddings, vectors that contain encoded information about a whole document. Of course, this technique can be used to encode just sentences a very popular and effective setting.
Doc2vec has two different approaches, similarly to word embeddings:
Distributed Memory Model of Paragraph Vectors (PV-DM)
Figure 4: PV-DM architecture
This approach is similar to CBOW in the sense that we are using context words as inputs and a central word as a target. The addition results on a new vector input: the document embedding (or paragraph embedding). While the context words inputs are sparse (one-hot), the paragraph vector is dense and it is shared across all data points from the same document. In this way, for each document, the algorithm learns an embedding for each central word and it learns the paragraph vector at the same time.
Distributed Bag of Words version of Paragraph Vector (PV-DBOW)
Figure 5: PV-DBOW architecture.
For this approach, the input is just the document id. From the id, the algorithm must retrieve a sentence from it. This approach works better when the document consists of a single sentence since there won’t be ambiguity on what words the model must output.
In practice, PV-DM works better, although a combination of both methods works even better, and such a model is what the paper recommends.
Experiments on financial news
In the following section we are applying both word and document embeddings to Tesla financial news, we won’t consider bags of words this time around.
We have collected news including the keyword “tesla” from a single source, Reuters. The earliest article collected was published on the 11-11-2011, the total amount of articles is 3770. After collecting all the data, we have discarded the parts of each article not mentioning the keyword. This decision was made because the amount of information available is not enough to train a general-purpose Language Model, therefore we must train a specialized model that focuses only on information about the company. The total number of sentences is 13631, or if you spell it backward: 13631, you gotta love palindromic numbers.
Data Analysis
Exploring the data we discover it has some flaws. For example, one of the lowest returns (-6.17 normalized return) happened on the 13-01-2012 and there wasn’t any news about Tesla the previous days or even during that day or the following day. As a possible explanation is that in 2012 Tesla was not as big of a company as it is today, and news feeds didn’t pay too much attention. This data flaw could be mitigated by aggregating data from several sources.
We have decided to compare news from a given day with stock returns from the following day (stock returns from TESLA Inc). This approach is not always valid, we will illustrate this with examples:
On the 5-11-2013 there was a very low return (-4.64 normalized return) and the day before, Tesla appeared on the news with sentences such as ‘Tesla sales and profit forecasts disappoint’, text like that has good predictive power. Another example, this time for a positive return took place on the 8-5-2013 when Tesla had a big upward swing (7.68 normalized return) and the day before, the news was like “Tesla recent fortunes and growth prospects have surprised analysts suppliers and even Tesla executives”. On the other hand, some times, news that could predict a big swing is published the same day as the mentioned swing. As an example, on the 16-7-2013: “… shares of us electric carmaker Tesla Motors Inc tumbled overnight following a target price downgraded by Goldman Sachs Group…”. It seems like Reuters didn’t know this was going to happen as they only published news after the market swing had occurred. Once again, including several data sources could help with this issue.
Having imperfect data is not a rare phenomenon in Machine Learning. We will do our best to fit our models.
Applying embedding models
First of all, we will train a Skipgram model on our corpus. The resulting vectors will have a length of 64.
Now we show words with similar vectors according to the model. The similarity will be measured using the Cosine Distance metric.
- ‘growth’: ‘grow’, ‘shortages’, ‘grown’, ‘sustainability’, ‘shortage’
- ‘surprised’: ‘surprises’, ‘convinced’, ‘surprise’, ‘wed’, ‘surprising’
- ‘removed’: ‘removing’, ‘removes’, ‘moved’, ‘loved’, ‘misrepresented’
- ‘crash’: accident’, ‘crashed’, ‘crashing’, ‘fatal’
The model does a good job at finding words with a similar lexicon, but it has a hard time finding words with similar meaning but a different lexicon. However, it manages to do it for the word “crash”. We can also find some signs of overfitting. For example, it seems like “growth shortages” are a common theme in the corpus. Consequently, the word vector for “shortages” is very similar to the word vector for “grow”. As an additional remark: weddings are a surprising event to the model!
We can do it better than just looking at single vectors. We can perform vector addition and then, look for vectors closer to the resulting sum. Combined vectors tend to describe much better a news article. We will now combine words that make up highlights for some articles:
- ‘growth’ + ‘prospects’ + ‘surprised’: ‘sustained’, ‘surprises’, ‘bemoaned’, ‘concerned’, ‘delighted’
- ‘comission’ + ‘removed’ + ‘chief’: ‘removing’, ‘directed’, ‘collected’, ‘thundered’
Once again, we see overfitting signs. Indeed, our dataset is not that big and the combinations we tried are likely to appear in just one article, not enough frequency by any means for the model to learn the patterns without overfitting.
We also tried to fit a sentence embedding model, that is, applying the document embedding technique we described earlier and considering each sentence to be a document. Vectors will have a length of 128, twice the length we used with word embeddings. Data variance is higher now that we combine words together to form sentences, the model must learn more dependencies.
Let’s look for similar sentences to the one we target, that’s coming right up:
- ‘tesla automotive gross margins dropped in the quarter to <mediumNum> % from <mediumNum> %’
- ‘tesla recommend to a friend rating fell to <mediumNum> % in the first quarter from a high of <mediumNum> % two years prior the glassdoor data showed’
- ‘tesla leapt to fifth place from the <mediumNum> th spot during the first quarter’
- ‘tesla incinerated more than <bigNum> million of greenbacks in the quarter to june ‘
- ‘the securities and exchange commission is pushing to have elon musk removed as both chief executive and board member at the <mediumNum> billion electric’
- chief executive elon musk contemplated <mediumNum> billion take
- chief executive elon musk told employees on tuesday that the <mediumNum> billion electric
- shares jumped almost <mediumNum> percent after chief executive elon musk said on twitter he is considering taking the electric car maker private at <bigNum> per share
- ‘tesla sales and profit forecasts disappoint’:
- ‘electric car maker tesla motors inc forecast profit and sales below wall street estimates and reported third’
- ‘tesla reports q <smallNum> sales and production missed targets’
- ‘tesla posts wider loss highlights energy storage demand’
The model seems to find somehow related sentences but the similarity comes from very localized parts of the sentences. For instance, in the first example, all sentences involved information about quarter statistics with numbers quantifying the changes, but there are sentences including upward as well as downward movements, so the model didn’t get that decisive difference. In the second example, all sentences talk about “chief Musk” but apart from that, the sentences are very different from the target. The third example includes sentences about Tesla sales forecasts, this time around, sentences retrieved are indeed quite similar.
All these examples are suggesting we have a model that achieves decent text representations but misses some key differences relevant to the financial markets. It also overfits to some degree.
Using embeddings for return prediction
Just to get a general metric of the model, we will try to predict the next day return movements only from the current day news. We will perform the task using all our dataset, training on 80% of the data and testing on the remaining 20%. It must be said that using exclusively news data from the previous day and predicting returns out of that is quite the task, not easy by any means. For the experiment we will apply both embedding techniques:
- Word embeddings: An LSTM will receive all word vectors from all articles published on the previous day and it will output a prediction for the current day return.
- Sentence embeddings: An LSTM will receive all sentence vectors from all articles published on the previous day and it will output a prediction for the current day return.
The LSTM will have 16 hidden units and dropout of 0.5 to avoid overfitting. Connected to the LSTM, there will be a Dense layer with 8 units and a dropout of 0.4. Other models were tried but they resulted in overfitting or underfitting.
The following figures plot the predictions for the training and validation datasets using either word2vec or doc2vec. As we can see, the model does a good job with the training set but fails with the validation set. When we applied stronger overfitting (dropout > 0.5, l1_l2 regularization) the results on the training set got worse while the validation results didn’t improve. This was expected to some point due to the deficiencies shown by the word representation models. If the vector representations are not encoding the information properly, then we can not use them to predict returns.
Figures 6a, 6b, 6c and 6d (top to bottom): Predicted returns vs real returns for training and validations sets and using both methods: Doc2Vec and Word2Vec
For completion, we show another metric, the signed accuracy, which is nothing more than the success ratio of trying to guess if the returns will be positive or negative. This is an easier task, unfortunately the models struggle with it as well.
Figure 7: Sign accuracy for Word2vec and Doc2vec. Train and test scores displayed.
Final notes
To wrap up, the author kindly sums up some recommendations for brave data scientists willing to give NLP a try:
- Learning text representations first, then using them for market prediction is perhaps too much too ask due to the high variance found in texts. Your model may learn some representation that just so happen to be not so useful from the financial perspective. The solution is to train the embeddings on labeled data. Instead of just giving raw text to the algorithm, you can label each sentence with financial information, for example the returns themselves. Just make sure you are splitting your train and test set beforehand.
- Use many different data sources to avoid bias or general weakness coming from a single source.
- Be willing to predict in shorter periods than a day. News information often comes hours before the market swing happens. Investors are usually fast at adapting to recent events and you are competing against them.
If you find the post useful or entertaining kindly let me know so I can celebrate. If otherwise, you have some complaints, I will be happy to receive your feedback and hopefully get a valuable lesson out of it. Finally, if you have performed similar experiments with better results, tell me about it so I can be properly jealous of you.
Hope you have a good day.
\(\) | https://quantdare.com/encoding-financial-texts-into-dense-representations/ | CC-MAIN-2019-43 | refinedweb | 3,326 | 58.62 |
# Full motion video with digital audio on the classic 8-bit game console
Back in 2016 an United States based music composer and performer [Sergio Elisondo](https://www.youtube.com/user/sergioelisondo) released an one-man band music album [A Winner Is You](https://sergioelisondo.bandcamp.com/album/a-winner-is-you) (know your [meme](https://knowyourmeme.com/memes/a-winner-is-you)), with multi-instrumental cover versions of tunes from numerous memorable classic NES games. A special feature of this release has been its version released in the NES cartridge format that would run on a classic unmodified console and play digitized audio of the full album, instead of the typical chiptune sound you would expect to come from this humble console. I was involved with the software development part of this project.
This year Sergio makes a return with a brand new music release. This time it is all original music album [You Are Error](https://www.kickstarter.com/projects/youareerror/you-are-error-video-and-audio-from-your-nes/), heavily influenced with the video game music aesthetics. It also comes with a special extra. This time we have raised the stakes, and a new NES cartridge release includes not only the digitized audio, but full motion videos for each song, done in the silhouette cutout style similar to the famous [Bad Apple](https://www.youtube.com/watch?v=FtutLA63Cp8) video. Yet again, this project is crowdfunded via Kickstarter. It already got the asked amount in a mere 7 hours, but there is still a little time to jump on the bandwagon and get yourself a copy. In the meantime I would like to share an insight on the technical side of both projects.
### Lots of ROM
So, how would one take an old console that has been released back in 1983, with the hardware that is likely originates from the even earlier times, that only has a handful of kilobytes of memory and mere megahertz of the 8-bit processing power, and make it to play digitized audio, let alone a full motion video?
Sure, it is possible to put a hardware MP3 decoding chip into the cartridge, or even better, a Raspberry Pi-like single board PC full of 32-bit power, and make it run the [classic Doom](https://www.youtube.com/watch?v=FzVN9kIUNxw). This is pretty cool, however, it raises a question of why the NES is even needed then. At least from a tech purist standpoint it would be difficult to say that the NES is capable of doing these miracles on its own. It sure looks pretty impressive regardless, especially to those who aren’t picky about technical details.
These considerations were among the reasons why we’re picked a bit more conventional approach — just a huge PRG ROM. For this purpose, a new custom cartridge board with 64MB (megabytes) ROM on board has been developed by RetroUSB. He also created a new custom mapper (the memory paging control device), very UNROM-like, just with 4096 16KB banks to pick from. This isn’t exactly authentic to the 80s era either, however, this way the actual audio and video playback is (barely) handled by the console all on its own.
The custom board with 64MB ROM.Development and debug
---------------------
Once the exact way we were going to do this has been decided, the problem of setting up a development toolchain has arisen. The new custom mapper was of course not supported in any emulator in existence, so to have something to begin with I had to modify the popular FCEUX emulator first. It features many nice functions, useful to debug, however, its emulation precision is not exactly at a high point. It was just enough for the first project, but for the second one I had to modify a couple of more precise emulators as well. None of those has been released to the public, as there were no ROM releases, and the mapper does not even have an iNES mapper number assigned to it.
Testing on the real hardware has been complicated with a quirky burning process — by some reason it takes nearly 4 hours to reflash the board. The lack of the board and the original console in my hands didn’t help, either. Thus Sergio was flashing my tests occasionally, and then I was doing some wild guesses based on the results and fixing it using an emulator.
I was using my regular NES development toolchain — the CA65 cross assembler of the CA65 package that allows to configure the binary output towards needs of an arbitrary 6502 powered system, the Notepad++ as code editor, Graphics Gale and GIMP to process the graphics, Wavosaur and Audacity to process the audio, VirtuaDub to process the video source, my own NES Screen Tool to prepare NES graphics, and some other tools. I also programmed custom converters for the audio and video data using Visual C++ Express. The amount of the data was pretty large, and the algorithms were bruteforce, so compiled C code was needed to do the processing faster, although for something smaller and simpler I would use Python, as it works pretty well for things like this. The compilation process has been automated with use of the regular Windows batch files and some Python scripts (my tastes are very singular).
Playing the audio
-----------------
The first project, A Winner is You, featured digitized audio only, without doing much besides this at the same time. That is, no animation has been displayed on the screen while playing the audio. This is relatively simple task code wise — we’re just fetching bytes from the ROM, switching the ROM banks as needed, and outputting the bytes to the APU DAC at a steady rate. This can be accomplished by nearly any home micro or a game console of the past, given that it has access to a ROM large enough, and a DAC.
The main difficulty of playing digital audio on such a simple machine like the NES is the lack of means to precisely synchronize the code with the real time such as high resolution timers and interrupts. The only time source is the CPU clock frequency and the number of clocks spent to execute each specific operation. This means that to provide a specific steady rate of outputting data to the DAC, the code needs to be carefully planned and timed very precisely. This, however, is a rather standard trick for such old systems, and it was perfected over the years, so it wasn’t a big deal. The NES CPU is fast enough to provide a theoretical maximum sampling rate of ~74 kHz for NTSC and ~69 kHz in PAL. Considering the amount of available ROM and the amount of the audio data to be allocated, the more traditional sample rate of 44100 Hz has been picked. The NES APU DAC has 7-bit resolution, so resulting sound quality is along lines of the original 8-bit Sound Blaster that featured a slightly better 8-bit DAC, but at a twice lower sample rate.
The player shell allowed the user to seek through a track, fast forward and rewind it with sound speeding up and reversing, just like the compact cassettes of the past did, as well as a pause and slow down. The player also supports both NTSC and PAL modes providing correct pitch and tempo. It has been implemented via a handful versions of the sound loop code that had different timings. When the user presses a button, an icon gets changed on the screen. To update the graphics, the audio stops for a moment, however, as the resulting action is always creating some change in the sound, this gap is not noticeable by the ear.
To illustrate all of the above here is a glimpse of the regular speed sound loop. It plays a number of 256-sample packets and features the NTSC timings. The number of CPU clocks is specified for each operation in the comments.
```
;1789773/44100=40t (fCpu/fSampleRate)
playLoopNTSC:
lda (BANK_OFFSET),y ;5+ fetching a byte from a ROM bank
sta APU_DMC_RAW ;4 sending it to the DAC
sta LAST_DMC_RAW ;4 also storing for later use in the player
nop ;2 the common delay in both code paths
nop ;2
nop ;2
nop ;2
nop ;2
nop ;2
iny ;2 increase pointer LSB
beq :+ ;2/3+ go the MSB increment path on overflow
nop ;2 extra delay for the shorter code path
nop ;2
nop ;2
nop ;2
nop ;2
jmp playLoopNTSC ;3=40t take the loop
:
inc
```
This code expects that a 256-byte data block won’t ever cross the ROM bank boundary. The banks get switched in the outer loop. Thanks to the mapper design, it is done in a very efficient way:
```
ldy #0 ;zero offset
sta (BANK_NUMBER),y ;switch the bank
```
The word variable BANK\_NUMBER stores the number of the currently used ROM bank. The bank's numbers are in the range of $8000..$8FFF, that is, 0..4095 with the most significant bit set. Mapper watches for any writes to the ROM addresses and translates them into bank selection, by latching the lowest 12 bits of the address into the bank selection register. The actual content of a write does not matter, just the address used.
Showing the video
-----------------
Performing any actions alongside playing a digitized audio, especially a full motion animation, is a much more ambitious task for the humble old NES. Not only the timings of the code has to be set very precisely to provide a steady sample rate, but the access to the video memory is only possible in certain times, namely the vertical blanking period. The video memory also can only be accessed indirectly, though a single-byte communication port with serial access.
The extra challenge is provided by the video system design of the NES. Basically it is a text mode with some hardware sprites on the top. The background layer can only display 256 unique characters (patterns) from a character set that is stored in the ROM, or gets loaded into character RAM (if it is installed to the cartridge board). Whenever the graphics of any particular character get changed, it changes everywhere on the screen at once. Thus the basic setting or clearing a mere pixel in an arbitrary location of the screen, and keeping this change for all subsequent frames, which is common on the raster buffer based systems, is a challenge on the NES. Also, to create animation on the NES it is often needed to update a few sets of data at once — the pattern data, the nametable (character map), and the color attributes area.
All of this turns even a simple blitting of the uncompressed video frames into the video memory into a tricky challenge. This is the reason that my video stream format lacks any interframe compression, which is otherwise very common for the video data from the dawn of time.
Kinda compressed
----------------
Nevertheless, my video stream format features a kind of intraframe compression, a lossy one even. Its purpose is not to just reduce the amount of the data, but to reduce the number of unique characters in a video frame. It is important, as only so many characters can be uploaded into the video memory during the access time in a single TV frame.
This “compression”, or rather a character set optimization, has been implemented in my tools for a while, and it often comes very handy for my NES projects. The premise is very basic — it seeks for two most visually similar characters in a set (the main difficulty is to pick a criteria of similarity), remove one of the characters by replacing it with the other, and repeat the process until the desired number of the characters in the set is reached. This approach provides major visual artifacts, the more prominent with increase of the “compression” ratio, and it does not do a good job for images with fine details, quickly turning it into noise.
The necessity of employing such a compression technique led to the stylistic choice for the actual video content — we’ve picked the silhouette style video along the lines of the famous Bad Apple video, which also featured this style. In fact, this video has become a kind of benchmark for video playback projects for retro computers, and I was using it during development as a test to my video encoder as well.
The pictures below demonstrate the optimization process in action — the source picture is made of 960 unique characters, then it gets reduced to 256, 128 and 64 unique characters.
The original unoptimized picture.Optimized to 256 unique characters.Optimized to 128 unique characters.Optimized to 64 unique characters.Precisely timed
---------------
The regular vertical blanking period, when the video memory can be accessed, lasts for about 2300 CPU clocks per TV frame, that’s 22 scan lines, 113.6 clocks each. In order to use this very limited time most effectively, an unrolled loop has to be used. It may be done in a few ways. This code that uses absolute addressing mode for source data will allow to transfer about 300 bytes in the given time:
```
lda SRC ;4
sta PPU_ADDR ;4 - 8 clocks per byte
```
Faster copying is possible, but it comes with limitations and difficulties. Here is a couple of mid-optimal solutions that would buffer the data into the zero page or stack page of the RAM:
```
lda
```
The fastest way of copying data to the VRAM considers storing the data directly inside the immediate load opcodes, via use of the self-modifying code:
```
lda #NN ;2
sta PPU_ADDR ;4 - 6 clocks per byte
```
This trick allows a transfer of nearly 400 bytes per standard vertical blanking period of a TV frame. However, it takes 5 bytes of code to transfer one byte of data, and 400\*5 = 2000 bytes of code, nearly the whole amount of the RAM that is available for the NES. That’s why the further calculations consider the least optimal approach with 8 clocks per byte.
In order to make a full update of the screen, which includes the whole character set and nametable, 5 kilobytes of data has to be transferred to the video memory. Considering the numbers above, it would take 5120/300 = 17 TV frames, resulting in the video frame rate of 60/17 = 3.5 frames per second.
To increase the throughput, the blanking time needs to be extended somehow. It is possible to do by forcing the blanking in some scanlines of the visible raster. Those will be displayed as a solid background color then. Using the 8-clock copying, about 14 bytes can be transferred in each extra blanking scanline. However, even if the whole screen is blanked, it will only allow to transfer less than 4KB per TV frame.
So many factors to be considered requires to find a balance between the amount of the data to be transferred, the number of scanlines to be displayed or blanked, and the number of TV frames spent to perform a full frame update, to provide a smooth frame rate in the video animation.
The commonly accepted minimal frame rate to maintain an illusion of the motion is considered to be 12-18 frames per second. It should be also considered that besides the video memory updates, the player code has to constantly maintain audio data fetching and outputting through the DAC on evenly and steadily spread time intervals, which are a few dozen of CPU clock long. This means that not the whole blanking period is actually available to access the video memory, it is partially spent to play the audio, too.
After many experiments this balance has been found:
* 256x160 pixels resolution (32x20 characters)
* 4 colors with a per frame palette
* 212 unique characters per frame
* 15 frames per second for NTSC, 12.5 frames per second for PAL
* Sample rate of 27360 Hz for NTSC, 25450 Hz for PAL
There were experiments with multi-palette conversion, too, by using 4 separate palettes and color attributes to increase the max totals of colors per frame to 13. However, it proved to be very tricky at the image conversion stage — the low resolution of the color attributes didn’t allow to find an approach that would split the image into areas with smooth transition between sub palettes. As it was not clear how to create such an algorithm, and if it is possible to do at all, it has been decided to stick to the basic 4 colors and just colorize some of the video sections into a sepia.
The full screen update in the resulting player takes four TV frames both in NTSC and PAL. The difference in video frame rate and audio sample rate is created with the difference of the TV frame rates (60/4=15, 50/4=12.5) and the difference in the main CPU clock frequency. The audio sample rate is defined by the fact that a sample gets outputted to the DAC every 64 CPU clocks; this number remains the same in both versions.
The video memory contents after the first TV frame.The contents after the second TV frame.After the third TV frame.After the fourth TV frame.The resulting video frame on the screen.The data format
---------------
In order to make the seeking (fast forwarding and rewind) through a video stream easier, the data size of a single video frame has been set to a fixed value, 8K per frame. Having this number, it is also easy to calculate how much video content can fit into the available ROM. It takes 120K per second, so a 64MB board can fit about 546 seconds of video, i.e. almost 10 minutes.
Each TV frame in a video frame is presented with a 2K packet that consists of 8 256-byte blocks. The data layout in the packets is very tricky, so it is hard to describe even for me now. The number of factors affected the creation of such a messed up layout:
* The data has to be read from the ROM in the middle of the visible raster, and has to be transferred to the destination both in the bottom part of the current TV frame and in the top part of the next TV frame.
* The code has to be very optimized, so the data is located in a way to allow the easiest access to a particular data piece at a given time.
* The format was changing and tweaked up all the time. The PAL support has been introduced lately, so it was easier to keep some of the preceding versions layout to avoid introducing even more changes in the already debugged code.
Each of the 2048-byte packets contain:
* 456 bytes of the NTSC audio (456\*60=27360 Hz)
* 509 bytes of the PAL audio (509\*50=25450 Hz)
The first three 2048-byte packets also contain:
* 1024 bytes of the character data (64 characters)
The last packet stores different video data:
* 320 bytes of the character data (20 characters)
* 13 bytes of the color palette
* 640 bytes of the nametable data
* 48 bytes of the color attributes data
In order to make the same video data useable both in NTSC and PAL, which is important to avoid duplication of the whole video stream, the PAL mode player skips some of the frames (3 of the 15 per second) — luckily it is rather easy to do without having the interframe compression. The audio data is stored in two versions, however, as it would complicate the code a lot otherwise. If the audio data would be presented with just a single version, the delays between the DAC outputs had to be different between PAL and NTSC, and it would take huge changes in the code that transfers the data into the video memory.
The code
--------
The video player code can’t simply copy the data from the ROM to DAC and the video memory. It is complicated with the way the mapper works. The NES architecture provides 32K of the address space for the ROM data. The mapper keeps the top 16K of these 32K fixed, i.e. maps a fixed ROM bank to this area, which contains the reset and NMI vectors, and such. The bottom 16K of the 32K is switchable, any of the 4096 ROM banks can be switched in there. As the usage of the unrolled loops is necessary to achieve the desired performance, the code size gets pretty large and has to be stored in the switchable ROM banks. However, this code needs to access the data that is also located in the switchable ROM banks, and as they both can’t be switched in, it makes the data inaccessible. To solve this issue, a buffering pipeline that squeezes the buffered data into the much limited 2K of the NES RAM has been employed.
The code can be split into two functional parts, the readers, and the pushers. These are called from different parts of the visible raster and in different TV frames from the main update loop. Double buffering is implemented, so the partially updated video frame is hidden from the screen, and only gets displayed once fully updated. To implement this, both PPU character sets and nametables have been used, this means that the pushers have to transfer data into different locations of the video memory at alternated video frames.
There are two readers, one for NTSC and another for PAL. A reader gets invoked during the active part of the raster, it fetches the data from a ROM bank and stores it in the RAM for further use. It also fetches the audio data and outputs it to the DAC right away, without putting it into a buffer. The buffered data contains character data, the nametable, and the audio data to be played in the other parts of the raster. As the reader needs to access a ROM bank, its code is located in the fixed ROM page that has a very limited space, so the reader's code is kept very universal and is not fully unrolled (six iterations unroled). In order to gain the fastest access to the different locations of the ROM bank, the self-modifying code is used, so it is placed into the RAM before execution.
The NTSC reader code is executed during the 160 visible scanlines, and uses up about 18176 CPU clocks. It reads 1536 bytes from a ROM bank, putting them to the RAM buffer, and plays 284 audio samples without buffering in the meantime. The PAL reader takes more time, 202 scanlines and 21568 clocks, even though it buffers the exact same amount of data. This is caused by the need to read and play 337 audio samples instead — the TV frame rate is lower, so more samples per frame to maintain a similar sample rate is needed. The extra 42 scanlines are located in the extended blanking period that is a specific feature of the PAL version of the NES.
The pushers duty is to transfer the buffered data from the RAM into the desired locations of the video memory as fast as possible. They’re fetching data from the buffer and streaming it to its locations — the audio data into the APU DAC every 64 clocks, the graphics into the video memory in the remaining time. There are 8 pushers, a couple per each of the four TV frames that is needed to perform a full video frame update. Each TV frame has a pusher that works in the top forced blanking half, and another that works in the bottom forced blanking half of the raster, the areas where video memory access is enabled. The code of both NTSC and PAL pushers is exactly the same.
The first of the top pushers sets up the palette, loads the nametable and color attributes, sets up a few extra bytes in the nametable to display the OSD icons. All the other pushers transfer different amounts of the character data to the video memory, 38 characters in all of the top pushers, 26 characters in all of the bottom pushers but the last, and the last one loads 20 more characters (26+38+26+38+26+38+20, 212 characters total).
.")A video frame without the character data loaded (a screenshot from an early prototype).The character data is partially loaded.A double-edged cart
-------------------
Besides the quite unusual contents in the NES realm, the new release also features another gimmick, a limited run of the double-edged carts with a connector on both sides. I pitched this idea as a joke first, but as they say, every joke has some truth.
A double-edged cart prototype.The reason here is that it was initially considered to use a 128MB version of the cartridge board, also courtesy of RetroUSB, and include the full album, so the content has been created for this large version. However, by some reason the player code that has been working just fine in the emulators and on the other boards (the older 64MB and in a separate small MMC3 test) just refused to work properly on the new board - it was glitching out, and the video playback just hang before ever starting. As at the moment the 128MB board is only accessible for its creator, and he wasn't able to figure out the issue so far, we had to take a decision of using the older smaller board and limit the release contents to just six songs. This brought the idea that the whole album would fit a couple of boards, then it turned into the idea of putting these boards into a single case and turning it into an extra feature of the project. | https://habr.com/ru/post/587944/ | null | null | 4,665 | 57.81 |
From: Arkadiy Vertleyb (vertleyb_at_[hidden])
Date: 2007-06-14 10:33:57
"Richard Day" <richardvday_at_[hidden]> wrote
> Basically if boost is striving for header only libraries when ever
> possible(And that is my impression)
I don't think this is 100% correct. In my impression Boost is a combinatiom
of libraries produced by different people (or different groups of people).
Each library author has his/her only preference regarding splitting the code
between headers and source files.
> should there be unnamed namespace's
> being used at all ?
The main problem of unnamed namespaces in headers is the possibility of ODR
violation. Has anyone seen any compiler complain about this? In Typeof
library we specifically implement test cases to cause ODR violation (because
we do use unnamed namespaces in headers; only since they are hidden inside
macro definitions, we don't get inspection complains). No compiler
complains of any ODR so far.
Now, I don't know if this is an appropriate topic here, but I would question
the usefullness of the ODR itself. My main problem with it -- it
contradicts to quite useful, IMO, idiom, where a library author defines a
main template in his/her library as a customization point, and the users
provide specializations of this template (similar to virtual functions in
runtime world).
Regards,
Arkadiy
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/06/123330.php | CC-MAIN-2020-29 | refinedweb | 240 | 55.44 |
This is your resource to discuss support topics with your peers, and learn from each other.
12-03-2009 11:00 PM
Anyone else? Curious if this work around makes any sense.
12-03-2009 11:53 PM
Hi, i didnt go thru ur code, but transparent manager wont give the popupscreen look and feel. It seems to be, a field added to the current screen, instead.
12-03-2009 11:56 PM
True, but can't one easily paint it to look like a popup?
12-04-2009 09:06 AM
I did some testing with tkroll's idea and it seems like it could work.
I also did some testing with BackgroundFactory for the 4.6+ builds of my application.
The problem I run into with both is that all I'm trying to accomplish is having a "busy" screen popup (as I said in my first post). For this I'm using an animated GIF. Using either tkroll's code or BackgroundFactory on newer OS gives me the transparent background. But, as the GIF animates, the border flashes. There is no border to the GIF file and I've tried using BorderFactory to set the border to transparent as well for 4.6+ OS. Neither seems to help. I still get a 1px border. And it doesn't shows around the entire GIF, only one side at a time. So, it might show only on the right border, then as the image animates, it will only show the bottom border, then the left, etc.
Anyone else run into this?
I don't want to get off topic too far, just wanted to check. tkroll's solution looks like it could work for all versions of the OS that I need, 4.5+. Anyone see any problems with this method?
03-02-2010 01:19 PM
Hey all,
So I've been reading through these forums for quite some time and I thought I should finally post something, since it might be helpful to some. I, too, was using some of this code to create a custom transparent popup with the default theme overriden. I'm also trying to keep a 4.5 target. It seems that devices that run on 4.5 like my 8320 and the simulators zany mentioned below all create a transparent popup as expected, but when you run the same code on a 4.6+ device like my 8900 or 9700 etc...you get an opaque popup with the white background. This is consistent with everything everyone else has mentioned on this thread, but I don't think everyone realized that the differences were between 4.5 devices and newer devices.
A suggestion in this thread is to take a screenshot of the active screen when you're pushing your "popup" and use that screenshot as the background for your "popup" screen class. I haven't tried it yet since I can't find any screenshot taking threads, but if it worked, at least you'd know it was universal between all devices....
zany wrote:
oh, i have tested this code in the following models' simulators but not in any real devices of 4.5
8100
8330
8300
8703e
since it was working fine for me in the simulators, i thought it should have same behavior in the real device but it is not
sorry
03-02-2010 01:45 PM
Actually, this post tells you how to take a screenshot. Not hard at all, you just need to make sure you have code signing keys from RIM since the Display class is part of the Controlled APIs that require signing.
online app form for signing keys:
03-02-2010 03:05 PM - edited 03-02-2010 03:13 PM
So after working with this for a while, I came up with something that works for me on both 8300 series phones and 8900 series phones (or what I'm assuming is an underlying issue between OS4.5 and OS4.6+ devices).
Using the information I got from the linked post and code from this thread, I came up with a custom popupscreen class that appears to be transparent.
import net.rim.device.api.system.Bitmap;
import net.rim.device.api.system.Display;
import net.rim.device.api.ui.Color;
import net.rim.device.api.ui.Field;
import net.rim.device.api.ui.FieldChangeListener;
import net.rim.device.api.ui.Graphics;
import net.rim.device.api.ui.Manager;
import net.rim.device.api.ui.XYRect;
import net.rim.device.api.ui.container.PopupScreen;
/*
* This class creates a custom transparent popupscreen with a semi-transparent rounded rectangle
* as a background of specified color. It takes in an existing Manager as the content of the popup.
*/
public class CustomPopup extends PopupScreen implements FieldChangeListener {
int color;
Bitmap backgroundBitmap;
private int _CUSTOM_WIDTH;
private int _CUSTOM_HEIGHT;
private int _X;
private int _Y;
protected void sublayout( int width, int height ) {
setExtent( _CUSTOM_WIDTH, _CUSTOM_HEIGHT );
// If you want to make it look like the popupscreen is glass that "distorts" the light, you can
// add a little offset of a pixel or two to the setPosition method below, so that the background of
// the popupscreen is painted a bit off from the actual screen below. Just a thought....
setPosition( _X, _Y );
layoutDelegate( _CUSTOM_WIDTH - 20, _CUSTOM_HEIGHT - 20 );
setPositionDelegate(10,10);
}
protected void applyTheme(){
}
public CustomPopup(Manager manager, int color) {
super(manager);
_CUSTOM_WIDTH = manager.getPreferredWidth() + 20;
_CUSTOM_HEIGHT = manager.getPreferredHeight() + 20;
backgroundBitmap = new Bitmap(_CUSTOM_WIDTH,_CUSTOM_HEIGHT);
_X = ( Display.getWidth() - _CUSTOM_WIDTH ) >> 1;
_Y = ( Display.getHeight() - _CUSTOM_HEIGHT ) >> 1;
// Take screenshot of active screen which will be used as our background later
Display.screenshot(backgroundBitmap, _X, _Y, _CUSTOM_WIDTH, _CUSTOM_HEIGHT);
this.color = color;
}
protected void paintBackground(Graphics g){
// Instead of trying to make the background transparent (which only work on OS4.5 devices and not OS4.6 devices for some reason),
// we'll just draw the screenshot bitmap to the background of our popup and make it appear that it's transparent
g.drawBitmap(0, 0, backgroundBitmap.getWidth(), backgroundBitmap.getHeight(), backgroundBitmap, 0, 0);
}
protected void paint(Graphics g){
XYRect myExtent = getExtent();
int alpha = g.getGlobalAlpha();
// Set transparency ~70%
g.setGlobalAlpha(0xB0);
g.setColor(this.color);
// Fill transparent rounded rectangle
g.fillRoundRect(0, 0, myExtent.width, myExtent.height,20,20);
g.setColor(Color.BLACK);
// Draw black rounded border
g.drawRoundRect(0, 0, myExtent.width, myExtent.height, 20, 20);
g.setGlobalAlpha(alpha);
// Draw the rest of the content from the manager specified in constructor
super.paint(g);
}
public void fieldChanged(Field arg0, int arg1) {
this.close();
}
}
I've tested it on my devices and it works perfectly now. Please note that in order to make the popupscreen appear transparent, a screenshot is being taken before the popup is pushed, and that is used as the background of the popup.
Also note that this is just a custom popup which takes in an existing manager defined elsewhere, and paints a rounded semi-transparent rectangle with a color of your choice. You can change the paint() method to have it paint whatever you want instead.
Well hope this helps, thanks for all the help that led up to this.
12-13-2010 02:44 AM
Hi ketithmay,
Thanks a lot for transparent popup.Your code works fine. Thanks you.. | https://supportforums.blackberry.com/t5/Java-Development/PopupScreen-with-transparent-background-and-theme-for-Storm/m-p/392546/highlight/true | CC-MAIN-2016-44 | refinedweb | 1,206 | 66.03 |
Why is my c++ program always asking to type
#include "stdafx.h" in the precompiled header??
This is a discussion on ?? within the C++ Programming forums, part of the General Programming Boards category; Why is my c++ program always asking to type #include "stdafx.h" in the precompiled header??...
Why is my c++ program always asking to type
#include "stdafx.h" in the precompiled header??
If you turn off precompiled headers in the IDE (compiler->settings), then it should stop.
I must admit that I'm not very familiar with term "Precompiled headers".
Can you explain it with a little more detail Salem?
Gotta love the "please fix this for me, but I'm not going to tell you which functions we're allowed to use" posts.
It's like teaching people to walk by first breaking their legs - muppet teachers! - Salem
If you are using Microsift Visual C++ latest versions, they need a pre-compileed header that contains functions and prototypes that work alongside the IDE you are using. If you create a program that uses muti-files, such as a large project, any files apart from header files you create, thus scource code files c.pp need the precompiled header in order for the compiler to understand certain aspects of the code. I do not know much more, but you could press F1 and go into the in depth help files and look that way.
A pre-compiled header is used to speed up compile times by combining commonly used headers into a single header (stdafx.h) that is "pre-compiled" once instead of having the common headers processed every tie they are included in another source file.
You don't need pre-compiled headers for small projects. If you do use them, you just have to add #include "stdafx.h" as the first include in all your source files. If you don't want to use them, then when you create your project in VC++, pick Win32 Console Application, and in the wizard make sure to check Empty Project. You might have to switch to a different page to find that option. If the project is already created, you'll have to find the option in the project settings or project properties. | http://cboard.cprogramming.com/cplusplus-programming/74755-a.html | CC-MAIN-2014-41 | refinedweb | 375 | 73.37 |
Java 17 and IntelliJ IDEA
A new Java release every six months can be exciting, overwhelming, or both. Given that Java 17 is also an LTS release, it’s not just the developers but enterprises also noticing it. If you have been waiting to move on from Java 8 or 11, now is the time to weigh its advantages.
In this blog post, I will limit the coverage of Java 17 to its language features – Sealed Classes and Pattern Matching for switch. I’ll cover what these features are, why you might need them, and how you can start using them in IntelliJ IDEA. I will also highlight how these features can reduce the cognitive complexity for developers. You can use this link for a comprehensive list of all the new Java 17 features.
Added as a standard Java language feature in Java 17, sealed classes enable you to control the hierarchies to model your business domain. Sealed classes decouple accessibility from extensibility. Now a visible class or interface doesn’t need to be implicitly extensible.
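A minimal sketch of what decoupling accessibility from extensibility looks like in code (the Shape hierarchy is an invented example):

    // One compilation unit, Shape.java. Shape is public and visible to everyone,
    // yet only the two permitted types are allowed to implement it.
    public sealed interface Shape permits Circle, Square { }

    final class Circle implements Shape { }   // each subtype must be final, sealed, or non-sealed
    final class Square implements Shape { }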
Pattern matching for switch is introduced as a preview feature. As the name suggests, it adds patterns to the case labels in the switch statements and switch expressions. The type of the selector expression that can be used with a switch is expanded to any reference value. Also, case labels are no longer limited to constant values. It also helps replace if-else statement chains with switch, improving code readability.
Let’s start with pattern matching.
Before we dive into pattern matching for switch, let’s ensure we have the basic IntelliJ IDEA configuration set up.
IntelliJ IDEA Configuration
Basic support for Java 17 is available in IntelliJ IDEA 2021.2.1. More support is on the way in future IntelliJ IDEA releases.
To use pattern matching for switch with Java 17, go to Project Settings | Project, set the Project SDK to 17 and set Project language level to ‘17 (Preview) – Pattern matching for switch’:
You can use any version of the JDK that has already been downloaded on your system, or download another version by clicking on ‘Edit’ and then selecting ‘Add SDK >’, followed by ‘Download JDK…’. You can choose the JDK version to download from a list of vendors.
On the modules tab, ensure the same language level is selected for the modules – 17 (Preview) – Pattern matching for switch:
Once you select this, you might see the following pop-up, which informs you that IntelliJ IDEA might discontinue support for the Java preview language features in its next versions. This is because a preview feature is not permanent (yet) – it is possible that it could change (or even be dropped) in a future Java release.
Ok, now we are ready to start with the Java 17 language features.
Pattern matching for switch (a preview feature)
Pattern matching is a big topic and it is being rolled out in batches in the Java language. It started with pattern matching for instanceof (previewed in Java 14, and becoming a standard feature in Java 16). Pattern matching for switch is included in Java 17, and we are already looking at deconstructing records and arrays with record patterns and array patterns in Java 18.
To understand pattern matching for switch, it will be beneficial to have an understanding of:
- Pattern matching, in general
- Pattern matching for instanceof
- The enhancement of the switch construct with Switch Expressions
If you are already familiar with all of the preceding topics, feel free to skip to the section ‘Welcome to pattern matching for switch’.
What is pattern matching?
Wikipedia states pattern matching is “the act of checking a given sequence of tokens for the presence of the constituents of some pattern”.
Let’s make it more specific to our examples. You can compare pattern matching to a test – a test that should be passed by a value (primitive or object) against a condition. For example, the following are valid pattern matching examples:
- Is the value an instance of class String?
- Is the value a subclass of class AirPollution, and the value returned by one of its methods, say, getAQI(), is > 200?
There are different types of patterns. In this blog post, I’ll cover type patterns, guarded patterns, and parenthesised patterns – since they are relevant to pattern matching for switch.
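To make those three kinds concrete before we get into the details, here is a small sketch using the Java 17 preview syntax (the classify() method and its labels are invented for illustration; in this preview, a guard is attached to a pattern with &&):

    static String classify(Object obj) {
        return switch (obj) {
            case Integer i && (i > 100) -> "a large number";   // guarded pattern
            case (Integer i)            -> "a number";         // parenthesized (type) pattern
            case String s               -> "a string: " + s;   // type pattern
            default                     -> "something else";   // a null selector still throws NPE unless 'case null' is added
        };
    }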
Pattern matching for instanceof uses a type pattern. Let’s look at how it works.
Pattern matching for instanceof
This feature extends the
instanceof operator with the possibility to use a type pattern. It checks whether an instance is of a certain type. If the test passes, it casts and assigns the value to a pattern variable. This removes the need to define an additional variable, or for explicit casting, to use members of the instance being compared.
Here’s an example of code that can be commonly found in codebases (which doesn’t use patterns matching for instanceof):
void outputValueInUppercase(Object obj) { if (obj instanceof String) { String s = (String) obj; System.out.println(s.toUpperCase()); } }
In IntelliJ IDEA, you can invoke context-sensitive actions on the variable s (by using Alt+Enter or by clicking the light bulb icon) and selecting Replace ‘s’ with pattern variable to use pattern matching for instanceof:
The scope of the pattern variable (a local variable) is limited to the
if-block because it makes no sense to be able to access the pattern variable if the test fails.
The simplicity of pattern matching of instanceof might be deceptive. If you are thinking it doesn’t matter much since it only removes one line of code, think again. Removal of just one line of code can open up a number of possibilities in which you can modify your code. For example, aside from using pattern matching for instanceof, the following code merges
if statements, introduces a pattern variable, and replaces a for loop with
Collection.removeIf():
Now, let me brief you on the enhancements to the switch statement with the switch expressions (covered in detail here, with Java 12, and here with changes in Java 13). As I mentioned before, if you are already familiar with switch expressions, please feel free to jump to the section ‘Welcome to pattern matching for switch’.
Switch expressions – what benefits do they bring to the table?
Switch expressions enhance the switch statement and improve the coding experience for developers. As compared to the switch statements, switch expressions can return a value. The ability to define multiple constants with a switch branch, and the improved code semantics, makes it concise. By removing default fall-through the switch branches, you are less likely to introduce a logical error in a switch expression.
Let’s look at an example that demonstrates the advantages switch expressions can have over switch statements.
In the following code, the switch statement has repetitive break and assignment statements in case labels, which adds noise to the code. The default fall-through in switch branches can sneak in a logical error. For example, if we delete the break statement for case label
STRAW, it results in an assignment of 300 instead of 200 to the variable
damage when you call the method
getDamageToPlanet(), passing it the value
SingleUsePlastic.STRAW. Also, with switch statements there isn’t any way to exhaustively iterate over the finite enum values:
public class Planet { enum SingleUsePlastic { CUP, STRAW, BOTTLE } int getDamageToPlanet(SingleUsePlastic plastic) { int damage = -1; switch (plastic) { case CUP: damage = 100; break; case STRAW: damage = 200; break; case BOTTLE: damage = 300; break; } return damage; } }
Let’s see how switch expressions can help. The following gif demonstrates some of the benefits of switch expressions such as concise code, improved code semantics, no redundant break statements, exhaustive iteration, and more:
With a basic understanding of pattern matching, pattern matching for instanceof, and switch expressions, let’s look at what pattern matching is and why you need it?
Welcome to Pattern matching for switch
Imagine being able to replace long if-else statement chains with concise switch statements or expressions. Yes, you read that correctly. Pattern matching for switch applies to both switch statements and switch expressions.
If you are wondering about the limited types of selector expressions (integral primitives, namely
byte,
short,
char,
int, their corresponding wrapper classes,
String and enum) that could be earlier passed to switch, don’t worry. With pattern matching for switch, type of selector expression for a switch statement and switch expression has been increased to any reference value and integral primitive values (
byte,
short,
char, and
int).
Also, the case labels are no longer restricted to constants. They can define patterns – like type patterns, guarded patterns, and parenthesized patterns.
Let’s start with an example.
Replace if-else statement chains with concise switch constructs – that test types beyond int integrals, String, or enums.
You can work with switch constructs that can be passed a wide range of selector expressions, and can test values not just against constants but also types. That’s not all, case labels can also include complex conditions.
Let’s work with a set of unrelated classes –
AirPollution,
Discrimination, and
Deforestation. These classes represent things that harm our planet. To quantify the harm, each of these classes define methods that return an int value, like,
getAQI(),
damagingGenerations(), and
getTreeDamage(). The classes define minimal code to keep it simple:
class AirPollution { public int getAQI() { return 100; } } public class Discrimination { public int damagingGenerations() { return 2000; } } public class Deforestation { public int getTreeDamage() { return 300; } }
Imagine a class
MyEarth, with a method, say,
getDamage() that accepts a method parameter of type
Object. Depending on the type of the object passed to this method, it calls the relevant method on the method parameter to get a quantifiable number for the amount of harm it is causing to our planet:
public class MyEarth { int getDamage(Object obj) { int damage = 0; if (obj instanceof AirPollution) { final AirPollution airPollution = ((AirPollution) obj); damage = airPollution.getDamage(); } else if (obj instanceof Discrimination) { Discrimination discrimination = ((Discrimination) obj); damage = discrimination.damagingGenerations(); } else if (obj instanceof Deforestation) { Deforestation deforestation = ((Deforestation) obj); damage = deforestation.getTreeDamage(); } else { damage = -1; } return damage; } }
Let’s look at how we can use switch expressions and IntelliJ IDEA to make this code more concise:
Here’s the final (concise) code for reference:
public class MyEarth { int getDamage(Object obj) { return switch (obj) { case final AirPollution airPollution -> airPollution.getDamage(); case Discrimination discrimination -> discrimination.damagingGenerations(); case Deforestation deforestation -> deforestation.getTreeDamage(); case null, default -> -1; }; } }
The power of this construct lies in how often it helps to reduce the cognitive complexity in the code, as I discuss in the following section.
Reducing cognitive complexity with pattern matching for switch
An if-else statement chain seems complex to read and understand – each condition should be carefully read together with its then-and-else code blocks. If we consider the if statement chain from the preceding section, it can be represented roughly as follows:
Now let me represent the switch construct from the preceding section:
Even by looking at both these images, the switch logic (though similar) looks simpler to read and understand. An if statement chain seems to represent a long, complex path, in which the next turn seems to be unknown. But this isn’t the case with the switch construct.
Let’s look at other reasons for working with pattern matching for switch.
Yay! You can now handle nulls within a switch construct
Previously, switch constructs never allowed using null as a case label, even though it accepted instances of class
String and enumerations. Then how was it possible to test whether the reference variable you are switching over is not null?
One approach has been to add a
@NotNull annotation to the variable accepted by the switch construct. You can add this annotation to a method argument, a local variable, field, or static variable. Another approach (much widely used) has been to check if the variable is not null by using an if condition.
Of course, if you do not explicitly check for null values and the selector expression is null, it throws a
NullPointerExpression. For backward compatibility,
null selector expression won’t match the default label.
Now, you can define null as one of the valid case labels – so that you can define what to do if the selector expression is null.
Does IntelliJ IDEA convert your if-statement to a switch expression or a switch statement?
In the preceding example, the if-else construct was converted to a switch expression. However, if you’d have selected this conversion, before using pattern matching for instanceof, you would have got a switch statement, as shown in the following gif:
Since the code block for if-else in the original code snippet defined multiple lines of code, it made sense to convert it to a switch statement rather than a switch expression.
This brings us to another interesting question – what is the relation between switch statement, switch expression, colon syntax, and arrow syntax? Let’s have a look.
Switch statements vs. Switch expressions and Colon Syntax vs. Arrow Syntax
A switch is classified as a statement or an expression depending on whether it returns a value or not. If it returns a value, it is a switch expression, otherwise a statement. Switch can also use either a colon or an arrow syntax.
Interestingly, the switch style (statement or expression) and arrow/colon syntax are orthogonally related, as shown in the following image:
The preceding matrix is not just limited or specific to switch statements or expressions that define a pattern in their case labels. It applies to switch statements and expressions that define constants too.
As shown in the previous examples, the case labels are no longer limited to constants. Let’s see what they have to offer.
Type pattern – case labels with a data type
In the previous examples, case labels included a data type. This is a type pattern. A type compares the selector expression with a type. If the test passes, the value is cast and assigned to the pattern variable that is defined right after the type name. Let’s pull the exact lines of code from these previous examples:
case Discrimination discrimination -> discrimination.damagingGenerations(); case Discrimination d -> { Discrimination discrimination = ((Discrimination) obj); damage = discrimination.damagingGenerations(); }
Scope of pattern variables
Pattern variables are local variables, which are casted and initialized when a type pattern tests true. Their scope is limited to the case labels in which they are declared – it doesn’t make sense for a pattern variable to be available in a switch branch in which its argument doesn’t match.
When do missing break statements in a switch statement become a compilation error?
In the following example, the pattern variable
d is limited to the case label
Discrimination. When patterns, instead of constants, are used in case labels for switch statements or expressions, missing
break statements is a compilation error because it can result in a default fall-through to a case label that did not pass the test:
Guarded patterns – conditions that follow test patterns
Guarded patterns can help you to add conditions to your case labels, beyond test patterns, so that you don’t have to define another if construct within a switch branch.
Let’s revisit a switch construct from a previous section:
public class MyEarth { int getDamage(Object obj) { return switch (obj) { case AirPollution airPol -> airPol.getDamage(); case Deforestation deforestation -> deforestation.getTreeDamage(); case null, default -> -1; }; } }
Imagine you want to return the value 5000, if the
getAQI() method on an
AirPollution instance returns a value of more than 200. We are talking about two conditions here:
- The variable obj is an instance of
AirPollution
airPol.getAQI()> 200
With the guarded patterns, you can add this condition to the case label, as follows:
public class MyEarth { int getDamage(Object obj) { return switch (obj) { case AirPollution airPol && airPol.getAQI() > 200 -> 500; case Deforestation deforestation -> deforestation.getTreeDamage(); case null, default -> -1; }; } }
It is interesting to note that when you pass an
AirPollution instance with
getAQI() value <= 200,
getDamage() method will execute the default branch and return -1.
Imagine adding multiple conditions to a switch label after the type patterns. While using operators like the conditions OR and AND, the order of execution can be unclear. In this case you can use parentheses to remove all ambiguities. Here’s an example that would return 500 when
getDamage() is called passing it an instance of
AirPollution:; }; } }
If I modify the placement of the parentheses from the preceding code (as shown in the following code snippet), calling
getDamage() passing it an instance of
AirPollution would return -1:; }; } }
Parenthesized patterns
So far, the necessity of parenthesized patterns is very low. It’s only to distinguish guard and expression in instanceof syntax:
if(o instanceof (String s && !s.isEmpty()) — here we use a parenthesized pattern (with guarded pattern inside). It will be more useful in the future with deconstruction patterns.
Pattern dominance – handling general types before specific types in case labels
What happens if the types being checked in switch case labels have an inheritance relationship? You should check for the most specific case, prior to checking for the general type.
Failing to do so would be a compilation error – as shown in the following image, when the code in method
getDamageForDifferentPollutionTypes compares its method parameter
obj with class
AirPollution and
Pollution (class
AirPollution extends
Pollution).
An interesting observation is that with a similar logic it isn’t a compilation error for an if-else statement.
However, in such cases, IntelliJ IDEA would not offer you the option to convert it to a switch. You get the option, when you remove checking a superclass before its subclass, or, perhaps checking for unrelated types:
Should you care about handling all possible values for the selector expression in switch?
Yes, you must have a branch to execute, regardless of the value that is passed to it, if you are using any kind of patterns in case labels with switch expressions or switch statements.
Imagine the following hierarchy of classes:
abstract class Pollution {} class WaterPollution extends Pollution {} class AirPollution extends Pollution {}
Defining a case label which handles instances of type Pollution as the last case label might look obvious in the following code, since switch is returning a value. Since the switch is switching over a reference variable of type pollution, it can be assigned a value of type Pollution or one of its subclasses. In this case a default label is not required:
class MyEarth { static int getDamageForDifferentPollutionTypes(Pollution pollution) { return switch (pollution) { case WaterPollution w -> 100; case AirPollution a -> 200; case Pollution p -> 300; }; } }
Also, you would need to handle all the possible values for method parameter pollution, even when the switch-statement is not returning a value:
class MyEarth { static void getDamageForDifferentPollutionTypes(Pollution pollution) { switch (pollution) { case WaterPollution w -> System.out.println(100); case AirPollution a -> System.out.println(200); case Pollution p -> System.out.println(300); }; } }
Or when using an old-style colon syntax:
class MyEarth { static void getDamageForDifferentPollutionTypes(Pollution pollution) { switch (pollution) { case WaterPollution w : System.out.println(100); break; case AirPollution a : System.out.println(200); break; case Pollution p : System.out.println(300); break; }; } }
Adding a null case to switch is not mandatory to ensure that it handles all the possible values.
Using sealed classes as type patterns – are they treated differently to non-sealed classes?
The short answer is yes they are. Please refer to the section ‘Sealed classes’ in this blog post for their detailed coverage.
Let’s revisit the hierarchy of the Pollution classes from our previous example and modify it by sealing it:
sealed abstract class Pollution {} final class WaterPollution extends Pollution {} non-sealed class AirPollution extends Pollution {}
Now the compiler is sure that the abstract class Pollution has exactly two subclasses. So you can handle values passed to method parameter pollution, as follows:
class MyEarth { static int getDamageForDifferentPollutionTypes(Pollution pollution) { return switch (pollution) { case WaterPollution w -> 100; case AirPollution a -> 200; // case Pollution is no longer required }; } }
This rule applies to the hierarchy of an interface too.
Freedom from defining code that might never execute
I know the title is confusing. To understand what it means, let’s look at an example of a sealed interface and the classes that implement it:
sealed interface Expandable {} record Circle(int radius) implements Expandable {} record Square(int side) implements Expandable {}
Without pattern matching for switch, the if statement in the following code would require you to define an else part even though you have handled both the implementing classes of the interface
Expandable, that is,
Circle and
Square:
public class Geometry { double getArea(Expandable expandable) { if (expandable instanceof Circle c) { return 3.14 * c.radius() * c.radius(); } else if (expandable instanceof Square s) { return s.side() * s.side(); } else { return -1; // This code might never execute } } }
However, this changes when you use pattern matching for switch, as follows:
public class Geometry { double getArea(Expandable expandable) { return switch (expandable) { case Circle c -> 3.14 * c.radius() * c.radius(); case Square s -> s.side() * s.side(); }; } }
Running inspection ‘if’ can be replaced with ‘switch’ on your code base
It can be time-consuming to look for if-else constructs in your code and check if they can be replaced with switch. You can run the inspection ‘if can be replaced with switch’ on all the classes in your codebase or its subset.
With this inspection, you can convert most of the if-statements to switch. I stated ‘most’ of the if-statements and not ‘all’, for a reason. As demonstrated using a lot of examples in the preceding section, you’ll notice that at times IntelliJ IDEA won’t offer you an option to convert an if-else statement to switch, or it might not convert it the way you have assumed it would. This is due to missing adherence to the multiple rules we talked about in this blog.
To run the inspection ‘if can be replaced with switch’, you can use the feature – Run inspection by name, using the shortcut Ctrl+Alt+Shift+I or ⌥⇧⌘I. Enter the inspection name, followed by selecting the scope and other options. The Problems Tool window will show you where you can apply this inspection. You can choose to apply or ignore the suggested changes as you browse the list in the Problems View Window.
We have talked a lot about the pattern matching for switch. Now let’s cover sealed classes and interfaces. Added as a standard language feature in Java 17, they haven’t changed from Java 16.
Sealed classes and interfaces (now a standard:
Here’s the modified code for reference:
public sealed class Plant permits Herb, Shrub, Climber { } public final class Shrub extends Plant {} public non-sealed class Herb extends Plant {} public sealed class Climber extends Plant permits Cucumber{} public:
Let’s quickly check the configuration of IntelliJ IDEA on your system to ensure you can get the code to run it., are added to the switch statements and expressions. This lets you eliminate the definition of code to execute for an unmatched
Plant type passed to the method
process():
int process:
Here’s the code for reference:
sealed public interface Move permits Athlete, Jump, Kick { } final class Athlete implements Move {} non-sealed interface Jump extends Move {} sealed interface Kick extends Move permits Karate {} final class Karate implements Kick {}:
I mentioned that Pattern Matching for switch is introduced as a preview language feature in Java 17. Just in case you are unaware of what preview features mean, I’ve covered it in the next section.
Preview Features.2.1 supports basic support for the pattern matching for switch. More support is in the works. This version has full support for recent additions like sealed classes and interfaces, records, and pattern matching for instanceof.
We love to hear from our users. Don’t forget to submit your feedback regarding the support for these features in IntelliJ IDEA.
Happy Developing! | https://blog.jetbrains.com/idea/2021/09/java-17-and-intellij-idea/ | CC-MAIN-2021-39 | refinedweb | 3,986 | 50.16 |
Ruby Array Exercises: Compute the sum of all the elements
Ruby Array: Exercise-9 with Solution
Write a Ruby program to compute the sum of all the elements. The array length must be 3 or more.
Ruby Code:
def check_array(nums) return (nums[0] + nums[1] + nums[2]) end print check_array([1, 2, 5]),"\n" print check_array([1, 2, 3]),"\n" print check_array([1, 2, 4])
Output:
8 6 7
Flowchart:
Ruby Code Editor:
Contribute your code and comments through Disqus.
Previous: Write a Ruby program to remove blank elements from an given array.
Next: Write a Ruby program to split a delimited string into | https://www.w3resource.com/ruby-exercises/array/ruby-array-exercise-9.php | CC-MAIN-2021-21 | refinedweb | 105 | 52.19 |
Page -> Node Dependency Tracking
Gatsby keeps a record of used nodes for each query result. This makes it possible to cache and reuse results from previous runs if used nodes didn’t change and, conversely, is used to determine which query results are stale and need to be rerun.
How dependencies are recorded
CREATE_COMPONENT_DEPENDENCY action and
createPageAction action creator
The internal
CREATE_COMPONENT_DEPENDENCY action handles the recording of Page -> Node dependencies. It takes the
path (page path for page queries or internal id for static queries),
path and a
connection tells Gatsby that this page depends on all nodes of this type. Therefore if any node of this type changes (e.g. a change to a markdown node), then this page must be rebuilt. Using connection fields (e.g.
allMarkdownRemark) is one of the cases when this variant is used.
CREATE_COMPONENT_DEPENDENCY action is conditionally dispatched by the internal
createPageAction action creator. Action creator checks if we already have given dependencies stored to avoid emitting no-op actions.
createPageAction is a low level internal API that is then used by higher level APIs.
Higher level abstractions
Node Model
Node Model is an API used in GraphQL resolvers to retrieve nodes from the data store. It’s used internally in resolvers provided by Gatsby core and it can be used in resolvers provided by plugins via
context.nodeModel. It calls
createPageAction under the hood because Node Model is aware of the path of the query as well as the nodes being retrieved.
getNodeAndSavePathDependency helper
getNodeAndSavePathDependency is a convenience wrapper around
getNode and
createPageDependency. It is not used internally. It’s a legacy API for plugins to record data dependencies and is equivalent to
nodeModel.getNodeById. The Node Model variant should be used instead as its API is less error prone. (Node Model is
path aware and doesn’t require you to pass it.)
How dependencies are stored
Page -> Node dependencies are tracked via the
componentDataDependencies redux namespace. to figure out which queries don’t have any dependencies yet. “Dirty” nodes are used to determine which query results are stale and need to be re-executed. Finding queries without dependencies is used as a heuristic to determine which queries haven’t run yet and therefore need to run. | https://www.gatsbyjs.com/docs/page-node-dependencies/ | CC-MAIN-2020-40 | refinedweb | 374 | 55.54 |
From: Johan Nilsson (r.johan.nilsson_at_[hidden])
Date: 2006-11-10 08:56:29
Christopher Kohlhoff wrote:
> Hi Johan,
>
> Johan Nilsson <r.johan.nilsson_at_[hidden]> wrote:
> [...]
>> 2) I've implemented similar code previously, but only designed
>> for portability with linux and Win32. What I very often use,
>> is something corresponding to retrieving errno or calling
>> GetLastError (on Win32).
>>
>> With the current design, it is harder to implement library
>> code throwing system_errors (retrieving error codes) as you'll
>> need to get the corresponding error code in a
>> platform-specific way. Or did I miss something?
>
> I think the intention is that platform specific library
> implementations will initialise the error_code in a
> platform-specific way. At least, this is what asio does.
Boost.Asio as such isn't platform-specific. Wouldn't you (as a library
implementor) prefer to do as much as possible in a portable way?
>
>> 4) I'm wondering about best practices for how to use the
>> error_code class portably while maintaining as much of the
>> (native) information as possible. Ideally, it should be
>> possible to check for specific error codes portably, while
>> having the native descriptions(messages) available for as much
>> details as possible. Using native_ecat and compare using
>> to_errno?
>
> Asio and its TR2 proposal provide a set of global constants of
> type error_code that can be used for portably checking for well
> known error codes. See sections 5.2 and 5.3.2.6 in N2054 for
> more detail:
>
>
I was looking for something like that, yes. Shouldn't that part be included
(but more exhaustively) in the Diagnostics proposal?
Also, I would personally prefer names that are less likely to clash with
others if the containing namespace is brought into scope by using namespace
<ns containing error codes>; error_access_denied,
error_address_family_not_supported etc.
Regards,
Johan
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2006/11/113060.php | CC-MAIN-2021-31 | refinedweb | 319 | 50.33 |
>>."
Re:I'll follow them here too. :D (Score:5, Interesting)
How do I know that MS won't file a software patent related to this work?.
"bring OSS more users" (Score:3, Interesting)
And more *windows* users, more windows license, more vendor lockin, and fewer alternative OS's.. Ya, real nice of them to 'help' us out. No thanks.
Re:I'll follow them here too. MS with the stigma of not actually wanting any other browsers to run on their OS, by making users use a round-about way of getting their browsers of choice, MS could point the finger right back at how much capitol they invested into the alternative software ecosystem, and how they leveraged their power to help bring FOSS and the package manager to their OS.
In short, creating a package manager like this is a good way for MS to be more two-faced than ever.
Not that I am gonna complain; ALL corporations are two-faced, and a well supported package manager, and better acceptance of the win32 platform (Not just windows, there are attempts at FOSS Win32 platforms.) by the FOSS community is a good thing all around.
I just dont think MS is overly concerned that it will compete with their software ecosystem at this point, and is more convinced that government regulators are the bigger threat.:3, Interesting)
How are you handling dependencies?
Will this be the standard windows every app carts around all its own libs, wasted space and outdated/insecure funland?
Re:Just like the other vendors (Score:3, Interesting)
Cygwin at least gives you a usable CLI environment on windows. Who installs this type of server software and does not install Cygwin?
A server without proper gnutools is painful to administer.
Re:wholly native toolchain (Score:3, Interesting)
All but the last one are fine. I have some windows boxes I have to deal with and I sure as hell do not want to be stuck using some GUI IDE just to build the latest $foobar.:I'll follow them here too. :D (Score:3, Interesting)
If your target audience is like me, then you're best off creating an automated conversion tool that can take a standard tarball and create an MSI package (or whatever) to your specifications with minimal human intervention. Ideally, this ought to extend seamlessly to the "make check" incantation, which is an important sanity check for cross platform development, since merely compiling the source successfully is no guarantee of correctness.
Note that doesn't mean that you have to accept *nixish directory names etc, it just means that when such a tool sees a standard tarball construct, it knows how to convert it to something sensible for the Windows platform.
As you pointed out yourself in the post, standard tarballs just work (mostly). You can gain a lot by reusing this property as a foundation for your project, rather than expecting people to adapt to your own design.:Just like the other vendors -compliant by default - VC++2002. The one before it, VC6, was released in 1998, before the final ISO C++ specification came out, so it's kinda silly to hold it against it. If you recall the original story, the "wrong" behavior was actually part of the draft spec at some point - they've been going back and forth on it.
Also, it is really a minor problem by itself, since you can trivially work around it by doing:
or compiling with the equivalent -D compiler flag. This will ensure correct scoping, and will not affect anything otherwise (the compiler will, of course, optimize away the always-false branch).
In contrast, g++ 2.95 (which was the stable version of g++ until mid-2001 - assuming you consider g++ 3.0 stable) didn't even have proper namespace support - it did parse namespace {
... } and using declarations correctly, but pretty much just ignored them, and just dumped all identifiers into global namespace. That is something that is not anywhere near as easy to work around.:I'll follow them here too. goodness to Windows, I may be tempted to switch...no really. Just think, it could reduce the "clutter" that inevitably builds up in a windows system over time (often requiring the 6-monthly reinstall), and updating your entire system would be possible from a single app. Sorry if this sounds like a troll - it really isn't intended to be.
Re:It won paths by yourself. Oh, and don't forget that a lot of OSS stuff has makefiles generated by autoconf, and many autoconf scripts just freak out on MinGW right away..
Re:Just like the other vendors (Score:1, Interesting)
(Posting anonymously because I somehow managed to exceed 50-posts-per-day limit for my karma.)
Correct me if I'm wrong, but why would "windows.h" have any problems with for-loop scoping either way? IIRC, the problem was strictly with MFC and ATL, for both of which there are much better options available, in any case.
I may be wrong here - this problem is a really old one, and I only vaguely recall last time I hit it. If I am (or if you really need MFC/ATL), more general solution in this case - since you're already using the preprocessor - is to use that "#define for
..." trick for your code, but skip it for "windows.h". VC++ has a non-standard way of doing that in form of push_macro and pop_macro pragmas, so you can do something like this:
Not very nice, but it's still not that much boilerplate, and it scales well. You can do even better by making your own header which just does #include "windows.h", wrapping it in push_macro/pop_macro, and then using that everywhere.
I don't dispute that it isn't problem, anyway. I do recall it being rather annoying back in the day, but then doing C++ back then was generally annoying, because standard compliance was lacking all over the place. I recall being similarly frustrated by e.g. lack of std::vector::at() in g++ standard library - God knows why it wasn't there. Or, getting back to VC6, advanced template magic such as partial template specialization was very much hit-or-miss. Oh, and no RVO, which really is a perf killer. And so on. I dare say that, against this background, the for-scope issue is really just a minor part of the overall bleak picture, with a relatively trivial workaround.
Re:I'll follow them here too. server can be configured, or just MS's servers.
Shachar | http://developers.slashdot.org/story/10/04/08/2343205/Microsofts-CoApp-To-Help-OSS-Development-Deployment/interesting-comments | CC-MAIN-2016-07 | refinedweb | 1,104 | 62.07 |
Details
- Type:
Improvement
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
-
- Labels:
- Urgency:Normal
- Bug behavior facts:Security
Description
Add GRANT/REVOKE mechanisms to control which jar files can be mined for user-created objects such as Functions and Procedures. In the future this may include Aggregates and Function Tables also. The issues are summarized on the following wiki page:. Plugin management can be tracked by this JIRA rather than by DERBY-2109. This is a master JIRA to which subtasks can be linked.
Issue Links
- incorporates
DERBY-2250 Implement USAGE privilege for Jar files
- Open
DERBY-2252 Add Jar IDs to the EXTERNAL names in routine declarations
- Open
DERBY-2253 Implement SQLJ.ALTER_JAVA_PATH
- Open
Activity
- All
- Work Log
- History
- Activity
- Transitions
I have some questions about the wiki page:.
"Resolving Classes -> 2nd Entry Point Class":
I'm confused about why the Visibility is different for DBCP and SQLJAR. Wouldn't the USAGE privilege control the visibility of jar files stored in the database regardless of whether they appear on derby.database.classpath?
Following Referenced Classes section:
This sentence ends abruptly: "The installed jar's optional classpath is."
.
It appears to me that the wiki-described work could be staged as follows:
1) Implement USAGE privilege. Require that USAGE be granted to PUBLIC on all jars in derby.database.classpath.
2) Require jar ids in EXTERNAL NAME declarations. Introduce pseudo jars SYS.JRE and SYS.CLASSPATH
3) Implement SQLJ.ALTER_JAVA_PATH
> Rick Hillegas commented on DERBY-2206:
> --------------------------------------
>
> I have some questions about the wiki page:
Thanks for reading & providing feedback.
>.
I thought that the assumptions only applied to discussion that followed that section. Before the assumptions section it's definitions of how classes could be loaded, not which are possible. Maybe there needs to be another category of class defined, those from a class loader created by a referenced class.
> "Resolving Classes -> 2nd Entry Point Class":
>
> I'm confused about why the Visibility is different for DBCP and SQLJAR. Wouldn't the USAGE privilege control the visibility of jar files stored in the database regardless of whether they appear on derby.database.classpath?
If you mean the section when the routine has an EXTERNAL NAME with a jarid, then the database class path (DBCP) is not applicable. The class loading is defined by the SQL standard which only loads from the jar the entry point class is in and the classpath for that jar file (if set).
> Following Referenced Classes section:
>
> This sentence ends abruptly: "The installed jar's optional classpath is."
fixed
>
> .
I don't see your logic here. If I have
derby.database.classpath="ONE:TWO:THREE"
then if my routine's entry point is in the THREE jar then referenced classes can be any class from ONE, TWO or THREE, which is as though the THREE jar had the classpath "ONE:TWO:THREE".
So I can't see what is special about the first class specified in derby.database.classpath.
Thanks for the quick responses, Dan. I'd like to continue the discussion of "Setting derby.database.classpath" using the example you gave. Here's some more detail on the problem case I had in mind:
o Suppose that there's an entry point Start_3() in the THREE jar file.
o Suppose that Start_3() calls methods in class Next.
o Suppose that there are three different versions of the Next class, one in each of the jar files.
o Which version of the Next class gets picked up to resolve the references in Start_3()?
It seems to me that the ANSI rules will give you the version of Next which lives in THREE. However, the Derby rules will give you the version in ONE.
The wiki page says that SYS.JRE includes just the classes in the java and javax namespaces. Some other namespaces ship with the JRE: org.ietf, org.omg, org.w3c, and org.xml. Should we include these in the namespaces covered by SYS.JRE?
SYS.JRE - Right, I was thinking about that this morning and it ities into the other thread about bootclasspath and the ability to tell if a class is on the bootclasspath. I think SYS.JRE really means any class on the boot class path, but currently the decision needs to be made before the class is loaded. That is decide to load from the jar classloader or delegate elsewhere. A possible alternative is to load any class using the default mechamism and then decide if it belongs to the JRE or not and make decisions off that.
One more factor is seeing if the statement below has been extended to all classes defined in J2SE or continues to be just the java.* classes:
"First, the ClassLoader will not attempt to load any classes in java.* packages from over the network. "
Possibly if the statement above continues to be true (limited to java.) and this is sufficient for security of the JVM then it's sufficient for Derby and SYS.JRE could just mean the java. classes.
This paper (at the bottom of page 40) has some details on the loading of system classes and restrictions on user class loaders.
"Dynamic Class Loading in the JavaTM Virtual Machine"
Reading that paper also made it obvious that the api's already treat system classes differently, e.g. 'ClassLoader.findSystemClass', thus SYS.JRE should probably map to the concept of system classes as defined by Java.
Thanks for those pointers. I have run a couple experiments on the Sun 1.4, 1.5, and 1.6 solaris vms. I get identical results on all vms. Here are those results:
By default, I have three classloaders:
1) A sun.misc.Launcher$AppClassLoader. This is the classloader returned by ClassLoader.getSystemClassLoader(). It is the ClassLoader which loads classes which I have written. I believe that this ClassLoader is responsible for classes visible on the CLASSPATH. I'm going to call this the System classloader.
2) That ClassLoader has a parent, which is a sun.misc.Launcher$ExtClassLoader. This parent ClassLoader seems to be responsible for loading classes from jar files in jre/lib/ext. I'm going to call this the Extensions classloader.
3) That ClassLoader has a null parent. I sampled some classes which appear in the javadoc for the Sun JREs. All of the the sampled classes have null as their ClassLoader. This includes classes in java., javax., org.w3c.dom.*, etc. I think that this null represents the JRE's ClassLoader.
In trying to map this onto the concepts on the wiki page, I come up with something like the following:
A) SYS.CLASSPATH embraces all classes whose ClassLoader is the System classloader. At a minimum.
B) I don't know where to fit the classes loaded by the Extensions classloader.
C) SYS.JRE could embrace all classes whose ClassLoader is null. I don't know whether the UR classloader is represented as null by other VM vendors. I also don't know if other vendors manage the same set of classes with their UR classloaders. I am worried about defining SYS.JRE as just the classes in java.*. I think that it would be prudent to respect the VM vendor's judgment about which classes cohere together under the UR classloader.
Interesting ... I think it shows that the concept of System class is not what I thought, it's as you say classes from CLASSPATH.
I don't think we can assume that null class loader means JRE class, that's explicit in the Java doc for Class.getClassLoader.
Or at least if getClassLoader() returns null then it is the bootstraploader but you can't guarantee an implementation will use null for the bootstraploader.
I'm pretty sure the IBM vms do not use null.
The "Inside Java 2 Platform Security, Second Edition" makes the claim on page 43 that classes loaded by the bootstrap loader do have 'null' for a class loader. They base the claim on the "Dynamic Class Loading in the JavaTM Virtual Machine" paper, but I can't see any evidence in that paper, and that statement contradicts the javadoc.
(the same page 43 also covers that "system classes" as defined by the JRE are really application classes)
Maybe it's not worth making the split between classes provided by the JRE and those by the enviroment (CLASSPATH)?
Maybe one pseudo jar: SYS.JRE_ENVIRONMENT ??
You might already be aware of this, but you can change which classes are loaded by the bootstrap classloader by using one of the -Xbootclasspath options. In a trusted environment you can probably use the classloader == null check to determine if a class is a "JRE class", but in general any class can be a "JRE class".
I also confirmed that both IBM and BEA VMs use null for the bootstrap classloader, but as Dan points out, I don't think this can be guaranteed. It also seems like these two vendors use the Sun application classloader (sun.misc.Launcher$AppClassLoader), but if it really is Sun's implementation I don't know.
I think Dan's suggestion about combining the two categories of classes makes sense, if it goes well with the security model being worked out.
I think that making this simpler is fine. I'm happy with one pseudo-jar which wraps everything loaded by the system, extensions, and bootstrap classloaders. As a nit, I'd sand down the name to something shorter. Maybe SYS.ENV.
I would be comfortable with this usage:
1) SYS.ENV is never mentioned in the jar-specific classpaths set by SQLJ.ALTER_JAVA_PATH. Instead, everything in SYS.ENV can be referenced, implicitly, by user code.
2) The only purpose of SYS.ENV is to qualify EXTERNAL NAMEs when declaring procedures and functions.
3) SYS.ENV starts out with USAGE granted to the database owner. Initially, only the database owner can publish entry points in SYS.ENV.
"The only purpose of SYS.ENV is to qualify EXTERNAL NAMEs when declaring procedures and functions. "
Does this mean that SYS.ENV would explicitly be needed as the jarid in EXTERNAL NAME?
What about routines that are declared without a jar identifer?
SYS.ENV looks a little forced as it doesn't behave like a regular jar file, I think due to merging the JRE and CLASSPATH classes which I think is the correct approach.
In some cases it seems like EXECUTE permission on sqlj.install_jar would have the same functionality in terms of security as USAGE on SYS.ENV.
I guess it will become clearer with a functional spec for DERBY-2252, the details of how SYS.ENV would work are not clear to me.
As I read the SQL standard, it seems to me that jar ids are mandatory parts of the EXTERNAL NAME. Without a jar id, you should get a syntax error when declaring a procedure/function. This, at least, is how I read sections 9.8 and 5.2 of part 13 of the SQL standard.
I agree that if you allow someone to install a jar file, then you are implicitly allowing them to call any method in the JRE, the extensions jars, and the CLASSPATH.
We could say that the only way to publish those methods is through wrappers in installed jar files. However, this seems a little awkward to me. In addition, I think it would raise upgrade issues for customers who have already published entry points in the JRE.
I'm working on a spec now.
> As I read the SQL standard, it seems to me that jar ids are mandatory parts of the EXTERNAL NAME. Without a jar id, you should get a syntax error when declaring a procedure/function.
Derby already provides the extension to allow method names without a jar identifier. I don't think that functionality should be removed.
I think I have suggested the wrong search order. According to the "Enable database class loading with a property" section of the Derby Developer's Guide, SYS.ENV should be prefixed to the beginning of the classpath specified by ALTER_JAVA_PATH, not appended to the end.
> Derby already provides the extension to allow method names without a jar identifier. I don't think that functionality should be removed.
If we allow this deviation from the standard, then, as you asked earlier, we will have to settle on a meaning for unqualified EXTERNAL NAMEs. To me, SYS.ENV seems like the best default here.
At this point, we are talking about an API which diverges from the SQL standard in several ways:
A) Jar ids, which are mandatory in SQL, are optional for us.
B) We have invented a pseudo-jar SYS.ENV, which does not appear in SQL.
C) We cannot preserve the derby.database.classpath search order without breaking the rules for the jar-specific classpath set by SQLJ.ALTER_JAVA_PATH.
I have two misgivings:
1) I am worried that we will confuse both ourselves and our customers with an API which is neither the SQL standard nor the old, familiar Cloudscape API.
2) I am worried that we have not stepped back and discussed the customer experience in terms of upgrade expectations and default, out-of-the-box behavior.
Right now, I'd like to get some clarity on issue (2). Are we expecting any of the following:
i) That Derby will be secure-by-default? Or is routine-security something you have to explicitly opt into?
ii) That users upgrading to 10.3 won't have to change their applications?
Here's another crack at this:
1) The default behavior for Derby is the current behavior with all of its security holes for java routines.
2) To get secure behavior for java routines, the customer has to explicitly opt-in. Let's be vague about what that entails right now.
3) If you do opt-in, then you get the SQL standard behavior:
3a) Jar ids are mandatory.
3b) There is no SYS.ENV pseudo-jar. Instead, to access methods in the JRE you have to include little wrapper methods in your jar files that you loaded into the database.
3c) The search order for customer-written routines is SQL standard: First we look in the jar file where the routine lives. Then we look in the other jar files in the order specified by SQLJ.ALTER_JAVA_PATH. Then we defer to the system class loader.
3d) At runtime, when we invoke the routine, we make sure that it actually lives in the declared jar file.
I agree (and has been my assumption all along) that if a jarid is specified then the behaviour for that routine should be defined by the SQL standard.
I'm not sure if opting in means that the ability to define routines without jars is not available though.
DERBY-2250 and DEBRY-2252 could be addressed (following the SQL standard) without resolving how to opt-in or how to make non-jarid routines secure. Those could be separate tasks. Ie. remove the 'pseudo-jars' from DERBY-2252.?
Rick wrote:
--------?
--------
How about really simple?
derby.database.classpath - not set (default) - no user defined routines allowed
derby.database.classpath= (empty string) - entry points in JRE classes allowed
derby.database.classpath=valid path - entry points in JRE and listed jars allowed.
Dan wrote:
>How about really simple?
>derby.database.classpath - not set (default) - no user defined routines allowed
>derby.database.classpath= (empty string) - entry points in JRE classes allowed
>derby.database.classpath=valid path - entry points in JRE and listed jars allowed
OK, this is progress. Still puzzling to me, though: What is the upgrade story here? In 10.2, under all of these settings of derby.database.classpath, you can publish entry points both in the JRE and on the CLASSPATH.
.
Here's my reading of section 9.8 (<SQL-invoked-routine>) of part 13 of the SQL Standard:
Syntax Rule 9 seems to say that at creation-time, the jar file needs to contain a class by the given name and inside that class (or its superclasses) there needs to be a method by the given name. However, the signatures don't have to match.
Syntax Rule 15 seems to say that it is implementation defined whether signatures are matched at creation-time or execution-time.
Here is my reading of section 11.2 (SQLJ_REPLACE_JAR procedure) of part 13 of the SQL Standard:
Syntax Rules 8 and 9 seem to say that an error is raised if the CREATE PROCEDURE/FUNCTION statements cannot be replayed against the new jar file. That is, whatever the creation-time rules are, they must still be valid after replacing the jar file.
Unsetting Fix Version on unassigned issues.
In order to not break legacy applications, I think we can't change the meaning of currently legal settings for derby.database.classpath. However, we could use derby.database.classpath as the opt-in signal. For instance, setting derby.database.classpath to the value "sys.sqlj" could be our signal to enforce the SQL Standard.
But in other cases the specs indicate breaking existing applications is ok, such as booting the network server with a security manager, limiting database creation, database shutdown, upgrade & encyption.
I think affecting exsiting applications is ok if by default it closes security holes, especially when they is an easy workaround (boot the network server with -unsecure, set derby.database.classpath to an empty string).
I also think that opting into the secure mode is the incorrect default (as in derby.database.classpath=sys.sqlj), Derby should be secure by default and have the flexibility to reduce restrictions.
I've also been thinking that since the various security changes assume system and/or database authentication, maybe some of the restrictions should not be enforced if authentication is not enabled.
E.g. with no authentication:
allow any user to create databases & shutdown the system
allow any user to upgrade, encrypt, shutdown database
allow any routine entry point
and maybe
don't boot the network server from the command line if no authentication unless '-unsecue' is set. This stops a false sense of security.
I think this changes would make the impact on existing users less.
I like the idea that Derby is secure by default. I hope this can be squared with the community's expectations that upgrade is painless. Those, at least, seemed to be our expectations during the last release cycle.
Let's run with your proposal that secure-by-default trumps painless-upgrade. It looks like you are suggesting that customers opt into security when they turn on authentication. I'm a little distressed that authentication does not imply that the GRANT/REVOKE machinery is turned on and that we have two independent knobs for these behaviors. We certainly need to turn both of these knobs in order to secure Java routines as described by this JIRA. It would be better if we had a single knob whose meaning was "run securely." Maybe the best we can do, going forward, is say that we recognize one setting of knobs which means "run securely" and that over time, as we plug security holes, customers who set the knobs this way will get better default security although maybe at the cost of some upgrade friction. Let me try to make this concrete by further refining your proposal:
If the customer sets BOTH requireAuthentication and sqlAuthorization to true, then:
1) Booting the network server installs a SecurityManager
2) Shutdown/upgrade/encrypt database powers are restricted to the database owner.
3) Derby ignores the setting of derby.database.classpath and, instead, manages security for Java routines according to the SQL standard.
If the customer fails to turn on one of these properties, then:
1') The network server fails to boot from the command line unless -unsecure is specified.
2') Anyone who can connect to a database can shut it down, encrypt it, and upgrade it.
3') Security for Java routines is what it was in 10.2, and Derby uses derby.database.classpath.
I don't think we should require sqlAuthorization in order to run the network server by default with a security manager.
I think the non-grant/revoke setup is still secure, just a different approach, one that trusts all authenticated users.
Maybe there are some security holes you are thinking of for client server in this mode?
I think the ability to use Java routines with entry points directly against classpath or JRE classes should continue to be allowed in SQL authorization mode, with the security enhancement that requires setting derby.database.classpath.
I don't have any client/server security holes in mind. Just reservations about how many knobs we have.
Let's focus on java routine security, then. I think you're proposing that if a jar id turns up in a routine's EXTERNAL NAME, then the SQL Standard prevails. Otherwise, the old rules prevail--modulo your suggestion that, by default, derby.database.classpath is unset and this prevents all user-published routines from running. We don't know how disruptive that last point will be after upgrade.
Let me make sure that we're on the same page here:
WITH JAR ID
If an EXTERNAL NAME contains a jar id, then we check for USAGE privilege on that jar file at DDL time. At run-time, the classpath is determined by SQLJ.ALTER_JAVA_PATH. USAGE privilege is also checked when running SQLJ.ALTER_JAVA_PATH.
WITHOUT JAR ID
If an EXTERNAL NAME does not contain a jar id, then there's no USAGE privilege to check at DDL time. At run-time, the classpath is determined by the old derby.database.classpath rules, modulo your suggestion about the default derby.database.classpath state. I don't think we need to require USAGE privilege in order to wire jar files into derby.database.classpath. I think security here is managed fine by restricting the power to change derby.database.classpath.
I think USAGE should be required to add a jar to derby.database.classpath, otherwise granting permission to set a database property implicitly grants usage on all installed jars. Seems cleaner to keep the separation and have grants be explicit.
I'm also assuming that jarid is only accepted in sql authorization mode as it doesn't make much sense without the ability to support GRANT USAGE.
I think that giving a user the power to set all database properties is pretty much a statement that they are a DBA. Combine that with the restrictions on who is allowed to install jar files and I don't see much security added by requiring USAGE on the specific jars in derby.database.classpath. It is easier for me to reason about this privilege if its meaning is just defined by the SQL standard.
I agree that jar ids only make sense if sql authorization is turned on. Thanks for bringing this up.
> I think that giving a user the power to set all database properties is pretty much a statement that they are a DBA
That's an interesting comment, is it the case because it should be that way, or it just happens to be so because there are security holes that have not been closed yet?
I tend to think it's the latter. granting permission to set database properties should not, I think, implictly mean the recipient is all powerful.
If it is decided that we don't want to close such holes then there should be clear documentation that granting EXECUTE on SYSCS_SET_DATABASE_PROPERTY allows that user to bypass all security mechanisms for that database.
I think you're right It's unfortunate that having the privilege to set one database property means that you have privilege to set them all. We might want to revisit this decision.
> It's unfortunate that having the privilege to set one database property means that you have privilege to set them all.
I found out this is not true. The database owner can provide a setup where a user can just modify one database property, such as derby.database.classpath. How to do it is a liitle non-obvious now, but it is possible. In the future it will become easier when "run as definer" mode is implemented for routines. So I think the assumption that being able to change a database property means one has the same powers as the database owner is an invalid one. Thus I believe that to add a jar file to derby.database.classpath should require USAGE priviledge to PUBLIC on that jar.
The "run as definer" mode I think is a good future step to resolve a number of the security issues such as:
- ability to set specific database properties (e.g database owner provides a run as definer routine that sets a single property, and grants EXECUTE on that).
- ability for others to shutdown a database (database owner provides a run as definer routine that shuts the database down).
and it's standard.
Here's the non-obvious way to provide ability for ALICE to set the database classpath without granting EXECUTE on SYSCS_SET_DATABASE_PROPERTY
1) database owner creates a table with a CP_VALUE column VARCHAR(30000)
2) database owner creates a trigger on INSERT of the table with an action of
CALL SYSCS_SET_DATABASE_PROPERTY('derby.database.classpath', NEW.CP_VALUE)
3) database owner grants INSERT to ALICE on the table
Now when ALICE inserts a value into the table it will be set as the database classpath. This takes advantage of the fact that triggers run as definer.
Optional steps are:
4) database owner creates a one argument procedure that inserts into the table
5) database owner grants EXECUTE on the procedure to ALICE (the INSERT grant is still needed).
The ALICE can use something like
CALL DBO.SET_CLASSPATH('ALICE.MYJAR');
(everything can always be solved by adding indirection
The table can also track the history of who changed the class path when. I have a JUnit test that demonstrates this, I will commit sometime in the next couple of days.
I'm afraid I don't see the need for maintaining two independent ways to manage java routine security. I think that's going to confuse users. I think there should be one way to do this, it should be straightforward, and, if possible, it should adhere to a standard that someone has thought through carefully. I am worried about overloading the meaning of USAGE via on-the-fly designs. The following makes sense to me as the single, straightforward way to manage java routine security:
1) derby.database.classpath is not set. As soon as you set this property, you blow a hole in your security because, now, any entry point on the system classpath can be called out-of-context.
2) The DBA role retains exclusive power to run SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY. This prevents derby.database.classpath from being set.
3) The only way to declare usable procedures/functions is to use the SQL Standard syntax, including jar ids, USAGE privilege, and jar-specific classpaths.
That's a very clever technique you discovered for granting privilege to set properties on a per-property basis. However, to me it looks like a sneaky way to subvert security. Customers should be discouraged form opening that kind of security hole.
> I'm afraid I don't see the need for maintaining two independent ways to manage java routine security.
but I think you are proposing two different security mechanisms.
I think you are proposing that if I have a jar file then I can control USAGE on it with GRANT/REVOKE but also USAGE can be given to others without my knowledge by the dbo granting the right to set the derby.database.classpath property.
I'm saying that if I have a jar file then the I control USAGE on it purely with GRANT/REVOKE.
Seems to be the former is more confusing. All I'm proposing is an extension of the existing GRANT USAGE behaviour, namely USAGE on the jar must be granted to PUBLIC in order to use the jar in the public derby.database.classpath.
I also think that security needs to be designed by what is possible for any user to do, not just what is recommended.
While it's a clever technique to allow per-property setting to be granted to individuals, it is possible and thus must be taken into account by security related changes. In addition, the very concept of definer invoked routines is designed for this type of restricted access, so I can't see it as a "sneaky way to subvert security". And at some point Derby will support such routines, so designing with those in mind I would say is a good approach.
Clarified the bug summary line. | https://issues.apache.org/jira/browse/DERBY-2206 | CC-MAIN-2016-07 | refinedweb | 4,806 | 64.61 |
Feature #8172
IndexError-returning counterparts to destructive Array methods
Description
=begin
There are a few desctructive (({Array})) methods that take an index as an argument and silently insert (({nil})) if the index is out of range:
a = []; a[1] = :foo; a # => [nil, :foo] [].insert(1, :foo) # => [nil, :foo] [].fill(:foo, 1, 1) # => [nil, :foo]
Among them, (({Array#[]})) has a counterpart that returns an (({IndexError})) when the index is out of range:
[].fetch(1) # => IndexError
and this is useful to avoid bugs that would be difficult to find if (({Array#[]})) were used. However for (({Array#insert})) and (({Array#fill})), there are no such counterparts, and that fact that these methods silently insert (({nil})) is often the cause of a bug that is difficult to find. I suggest there should be some versions of these methods that return (({IndexError})) when index is out of range:
[].insert!(1, :foo) # => IndexError [].fill!(:foo, 1, 1) # => IndexError
I believe this would make debugging easier.
=end
History
Updated by sawa (Tsuyoshi Sawada) about 6 years ago
=begin
In the above, I missed to say that there is no counterpart for (({Array#[]=})). There should be one for it as well, but I cannot think of a good method name.
=end
Updated by nobu (Nobuyoshi Nakada) about 6 years ago
=begin
(({Hash})) has (({#store})) as an alias of (({#[]=})).
=end
Updated by duerst (Martin Dürst) about 6 years ago
This may be just an issue of wording: You say "index is out of range".
By definition, Ruby arrays don't have a range. They can grow
dynamically. In many cases, this is a big feature.
Also, you complain about inserting nil. So what about the following case:
a = [1, 2, 3]; a[3] = :foo; a # => [1, 2, 3, :foo]
There is no nil, so maybe this is okay. But the array is expanded.
Should there be an error?
Regards, Martin.
On 2013/03/27 10:51, sawa (Tsuyoshi Sawada) wrote:
Issue #8172 has been reported by sawa (Tsuyoshi Sawada).
Feature #8172: IndexError-returning counterparts to destructive Array methods
Updated by sawa (Tsuyoshi Sawada) about 6 years ago
=begin
Martin
For clarification, I meant to have it return an error only in cases where (({nil})) needs to be inserted otherwise. So cases like the following should not return an error:
a = [1, 2, 3]; a[3] = :foo; a # => (actually it should be a different method name) [1, 2, 3, :foo] [1, 2, 3].insert!(3, :foo) # => [1, 2, 3, :foo] [1, 2, 3].fill!(:foo, 3) # => [1, 2, 3, :foo]
=end
Updated by matz (Yukihiro Matsumoto) about 6 years ago
- Status changed from Open to Feedback
I am not against the idea itself, but using bang (!) for the names is not consistent with other bang methods.
Matz.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/8172 | CC-MAIN-2019-22 | refinedweb | 462 | 71.14 |
The following forum message was posted by fabioz at:
Python 2.5 doesn't support the with statement unless you add an import for it
(Python 2.6 onwards doesn't require it):
from __future__ import with_statement
Cheers,
Fabio mvschenkelberg at:
That did it!
Thanks!
Max
The following forum message was posted by fabioz at:
Ok, just exported it... please follow the instructions at: and let me know if
it works for you (if it doesn't, please let me know so that I can update it).
Cheers,
Fabio
The following forum message was posted by fabioz at:
Well, not sure if that's it, but your Project\src\TestPkg MUST have an __init__.py
to be recognized as a python package, so, can you check if you have it?
If you have it, can you attach your sample project to a bug report so that I
can take a look on what may be wrong?
Cheers,
Fabio
The following forum message was posted by bsnahrwold at:
I see that many other posts have been made regarding unresolved imports. I
have really
tried to review the docs but am not getting anywhere trying to import between
packages.
I have the followings code structure:
Project\src\TestPkg\TestMod.py (containing TestClass)
Project\src\TestPkg\TestUnitTest.py
TestUnitTest.py contains: from TestMod import TestClass
No problems. It all works with Project properties:
Jython, grammmar version 2.2, Interpreseter jython 2.2.1 (points to
c:\jython2.2.1\jython.jar)
PYTHONPATH contains:
/Project/src/
/Project/test
I have tried configuring c:\jython2.2.1\registry with:
python.path = C:\\PathtoProject\Project\src;C:\\PathtoProject\Project\test
If I create a second package in this same project
Project\test\UnittestPkg\TestUnitTest.py
I am unable to import from TestPkg:
from TestPkg.TestMode import TestClass
Code Assist does not see TestPakg, only UnittestPkg.
I had created a number of applications that perform this type of importing between
packages, but outside of Eclipse and the src folder requirement.
If I don't develop with src forders, I have no code assist. If I develop in
Eclipse with
Source folders, (whether in native Eclipse or Apcelerator 3.0), I get unresolved
imports
and code assist sees nothing.
Is there a tutorial somewhre that describes how to do these kinds of imports
between
packages using jython as the interpreter in Apcelerator 3.0? Or does anyone
have a
recommendation on how to successfully configure these types of packages?
Thanks,
Bryan Nahrwold
The following forum message was posted by mvschenkelberg at:
I was attempted to install pydev using the eclipse p2 director application and
ran into untrusted certificate issues. I found this thread talking about it:
&atid=577329
I went to the eclipse forums to ask them about it and there is a bug report
to allow the director to install jars with untrusted certificates but for now
the workaround is to manually add the certificate you used to sign the jars
to our keystore using:
ts
I was wondering if you guys could post the certificates you used so I could
import it and install via the director as it is required for our project.
I used this command to extract out one of our certificates as a test:
keytool -exportcert -alias aliasname -file aliasname.cer
I was able to list the certificates we had through:
keytool -list
Max
The following forum message was posted by fabioz at:
It's possible, but it's currently not in the UI (although that's already planned).
To enable it, do in your code (note that pydevd will always be there when you
debug, so, it doesn't really need to be in your pythonpath):
import pydevd;
pydevd.set_pm_excepthook(exceptions)
where exceptions is a tuple of exceptions to be handled (if not passed, any
uncaught exception will be gotten).
Cheers,
Fabio
The following forum message was posted by fabioz at:
I haven't checked this, but I think it goes something like
... get the ScriptConsolePage ...
scriptConsolePage.getViewer().getDocument().replace(offset, length, text) <--
just make sure to always only add content to the end of the document (adding
to other places may have bad repercussions) -- whenever you add a new line,
that line will be evaluated.
Also, not sure exactly what you plan to do, but if you add content while it's
waiting for a response of the server I'm not sure what will happen (it'll probably
just ignore what you've written, but I may be wrong there).
Cheers,
Fabio
The following forum message was posted by at:
Hello,
I am trying to send python commands to the pydev console inside an eclipse plugin.
I have tried this:
MessageConsole console = findConsole ("Pydev Console [0]"); //
finds the associated console
MessageConsoleStream out = console.newMessageStream();
out.println("print 8"); // nothing happens
Which does not seem to do anything. I probably need to use a different class
somewhere in pydev to do this but I'm not sure where.
Thanks,
Brian
Hello All,
Breakpoints won't work in pydev remote debugging.
Eclipse version Version: 3.5.1.R35
pydev version 2.0.0.2011040403
python version 2.4 OR 2.6
I have eclipse running in my box, whereas python scripts are in a test server. Python scripts should run from that server, because it has to process huge amount of data which i can't copy to my local box.
So i gone ahead and tried remote debugging with pydev. Note that i have already used eclipse & pydev to debug python scripts in my local box.
I installed latest pydev.
Copied org.python.pydev.debug_2XXXX/ into test server. Location of debug module is same in test server and in my box.
Copied python scripts into my local box. Location of scripts are same in test server and in my box.
Added the following lines to the test script in remtoe server.
import sys
sys.path.append('XXXXX/eclipse/plugins/org.python.pydev.debug_2.0.0.2011040403/pysrc')
import pydevd
pydevd.settrace(host='XXX.XX.XX.XX',stdoutToServer=True, stderrToServer=True)
Started Debug server in Eclipse my local box.
Started the script in test box.
When remote debugged first time it prompted for location of the script locally. It waits at the first instruction after pydev.settrace().
Everthing is okey if i go step by step, F5/F6.
Whereas when i hit F8, it continues without respecting breakpoints.
debug output (trimmed pydb to java traffic)
DEBUG_TRACE_LEVEL = 1
DEBUG_TRACE_BREAKPOINTS = 2
pydev debugger: warning: psyco not available for speedups (the debugger will still work correctly, but a bit slower) ('Connecting to ', 'XX.XX.XX.XX', ':', '5678')
('Connected.',)
('received command ', '501\t1\t1.1')
('received command ', '111\t3\t/XXX/test.py\t71\t**FUNC**add_all_fields\tNone')
Added breakpoint:/sas3/XXX/test.py - line:71 - func_name:add_all_fields ('received command ', '111\t5\t/XXX/test.py\t135\t**FUNC**\tNone')
Added breakpoint:/sas3/XXX/test.py - line:135 - func_name:
('received command ', '101\t7\t')
('received command ', '114\t9\tpid15847_seq1\t277042480\tFRAME')
('received command ', '112\t11\t/XXX/test.py\t135') Removed breakpoint:/sas3/XXX/test.py ('received command ', '111\t13\t/XXX/test.py\t135\t**FUNC**\tNone')
Added breakpoint:/sas3/XXX/test.py - line:135 - func_name:
('received command ', '106\t15\tpid15847_seq1')
(DEBUG) - cfgparser - 2011-04-21 09:01:22,860 - Processing config file:
Traceback (most recent call last):
File "/XXXX/eclipse/plugins/org.python.pydev.debug_2.0.0.2011040403/pysrc/pydevd_comm.py", line 310, in OnRun
self.sock.send(out) #TODO: this does not guarantee that all message are sent (and jython does not have a send all)
error: [Errno 32] Broken pipe
Cheers,
Uday.
Hello PyDev users,
Does anyone know how to turn on the hyperlinking of Python tracebacks in the console output generated by an external tool ?
By external tool, I mean a custom command-line tool that is accessible under the menu Run -> External Tools -> ...
Thanks for your help,
Nicolas Maquet
Atos Worldline SA/NV - T&P>ENG>DEP
nicolas.maquet@...<mailto:cedric.meuter@...>
Phone : +32.(0)2 727 61 68
Atos Worldline is an Atos Origin company :<>
Haachtsesteenweg 1442 Chaussée de Haecht- 1130 Brussels Belgium
>>> Before printing this e-mail, think about the environment <<<
________________________________
Atos Worldline SA/NV - Chaussee de Haecht 1442 Haachtsesteenweg
- 1130 Brussels - Belgium
RPM-RPR Bruxelles-Brussel - TVA-BTW BE 0418.547.872
Bankrekening-Compte Bancaire-Bank Account 310-0269424-44
BIC BBRUBEBB - IBAN BE55 3100 2694 2444
."
The following forum message was posted by eldergabriel at:
0) PyDev rocks; thank you (I intend to donate very soon).
1) My problem - I saw there's a related thread from tim-erwin, started about
a week ago. But my situation is a little different, and I suspect this problem
could be fairly widespread. My development machine is running macosx v10.5.8,
btw.
In the past, I have been able to remove the configured python interpreter from
the eclipse -> pydev preferences, and then re-add it using the auto config button
/ tool. I've had to do this to fix unresolved import errors (even tho the module(s)
in question were handily importable using the interactive interpreter in a bash
shell, but that's another story), and up until the most recent attempt to do
this a few weeks ago, this has worked without any hiccups.
[i]Now[/i], the auto config seems to find all the standard macosx python library
framework folders (plus the wxpython ones I installed), presenting a dialog
box showing the automatically generated list, with their selectable checkboxes.
No matter what combination of folders I do or do not select here, I get the
aforementioned error message after clicking the OK button. It won't let me add
the interpreter and set the PYTHONPATH with the folders that it is finding;
this is bad.
Troubleshooting steps taken thus far:
This is not due to an eclipse / pydev configuration or preference that is specific
to my user profile, as it also happens when I log in as a different user, and
try to configure the interpreter in pydev.
I uninstalled the eclipse galileo version that I was running when the problem
first started, and then downloaded and installed the eclipse ide for java developers,
32-bit helios v3.6.2, followed by cdt (enabled the built-in repo) and then pydev
(via the eclipse marketplace). No change after this.
Taking the hint from the error message that it was looking for the .py stdlib
files, I searched the hard drive and python framework folders. The
/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5 folder
contains no .py files; just .pyc and .pyo files.
What I find particularly odd about this happening now is that it appears that
all the library / framework .py files were never present from the start (afaik).
And I certainly wouldn't have deleted them, if they had been present. Perhaps
some recent system update removed or compiled the .py files?
In short, why would pydev be insisting that the python stdlib .py files be present
[i]now[/i], even though I've been using pydev for the past two years? Until
recently, I've had no problems using the interpreter auto config, especially
to have it re-detect standard libraries to correct unresolved import errors
for modules that were clearly already present on the system.
I don't want to download and install the mac-specific python distributions from
the python foundation (if this can be avoided), as the target machines for my
python development projects will, by default, only have the official python
framework provided by the OS vendor (i.e. apple). The lack of wxpython by default
on these machines is bad enough.
Sorry so verbose, but I didn't want to omit any possibly relevant info. Help
would be greatly appreciated at this point. The interpreter and framework are
functionally present, and I don't know if it's a bug in pydev or eclipse that's
preventing me from being able to use it like it was working before, or what.
I'd really like to get back to using pydev for my development projects.
thanks in advance,
- gabriel
The following forum message was posted by at:
Is it possible to make the pydev debugger break on uncaught exceptions? How?
If i goto window->preferences->java->debug i see an option to suspend execution
on uncaught exceptions, but there is no such option under
window->preferences->pydev->debug.
The following forum message was posted by fabioz at:
It seems there really was an error during the install... (probably the network
got down during the installation and eclipse got lost somehow). You can try
uninstalling and reinstalling again...
It's not necessary to use Aptana Studio 3, but using it has the advantage that
PyDev is preinstalled :)
Cheers,
Fabio
The following forum message was posted by zbor at:
Yes,Fabio. Troubles occur when i create Py project. I have installed successfully
PyDev(or not).
Thx, after replacement of links, the error "cant find repository" disapperared.
But it still doesn't work. When i am trying to create new project a window occurs
"The selected wizard could not be started."
[img][/img]
When i go to Preferences-PyDev there is an error "Unable to create the selected
preference page."
[img][/img]
P.S. is it necessary to install Aptana Studio ?
Hi Fabio,
I have performed more research and it appears to be actually a bug with
RSE - for example if one switches the project to JavaScript the error is a
little more descriptive but of same type:
Errors occurred during the build.
Errors running builder 'JavaScript Validator' on project
'KITSrv01_flyermail'.
java.lang.NullPointerException
Thank you for a quick response! On a side note Fabio just curious what
your Eclipse setup consists of?
Thanks
On Mon, 18 Apr 2011 15:26:42 -0700, Fabio Zadrozny <fabiofz@...>
wrote:
>@...
>
--
KL Insight
Daniel Sokolowski - Web Engineer
933 Princess St. Suite 202
Kingston, On K7M 1H3
v. 613-344-2116
e. daniel.sokolowski@...
The following forum message was posted by zbor at:
Hi all.
I encountered the problem. When i am trying to create new PyDev project, an
error occurs:
Error
Unable to load the repository
Unknown Host:
When i open this link() in my browser, it redirects
me here -
So i understand my Eclipse: he can not find repository because there's nothig
to look at this link...
Does anybody know how to solve this problem?
P.S. I installed PyDev as described here
[url][/url] (Help-Install New software-and
so on...)
Thank you..
The following forum message was posted by cito at:
Thanks, that worked and installed the latest PyDev.
The following forum message was posted by fabioz at:
Which PyDev version are you using?
The following forum message was posted by fabioz at:
It seems that URL is outdated (I'll have to update the PyDev homepage too).
Please read the instructions at: to get the
latest nightly of Aptana Studio 3.
Cheers,
Fabio
The following forum message was posted by cputoaster at:
when trying to start a django app in pydev, I get this error:
[code]ImportError: Settings cannot be imported, because environment variable
DJANGO_SETTINGS_MODULE is undefined.[/code]
I do have it defined in the project PyDev - PYTHONPATH -> String Substitution
Variables, and can also see it in the pydev django interactive shell via
os.environ
Does anybody have any idea what could be the problem?
Cheers,
Andres
The following forum message was posted by fabioz at:
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/pydev/mailman/pydev-users/?viewmonth=201104&style=flat | CC-MAIN-2017-17 | refinedweb | 2,624 | 64 |
Table Of Contents
Glossary
A
A record
DNS Address resource record. Maps a host's name to its address and specifies the Internet Protocol address (in dotted decimal form) of the host. There should be one A record for each host address.
address block
Block of IP addresses to use with DHCP subnet allocation that uses on-demand address pools.
addrblock-admin,
addr-block-admin-
readonly
Address block administrator. Web UI base role that has unconstrained DHCP address block administration privileges. There is a read/write and read-only variant to this role.
admin
Default name of the Web UI superuser or global administrator.
administrator
User account to adopt certain Web UI functionality, be it defined by role, constrained role, or group.
alias
Pointer from one domain name to the official (canonical) domain name.
See
BIND.
BIND
Berkeley Internet Name Domain. Implementation of the Domain Name System (DNS) protocols.
See
CMTS.
cache
Data stored in indexed disk files to reduce the amount of physical memory.
caching name server
Type of DNS server that caches information learned from other name servers so that it can answer requests quickly, without having to query other servers for each transaction.
case-sensitivity
Values in Network Registrar are not case-sensitive, with the exception of passwords.
CCM
See
Central Configuration Management (CCM) database.
ccm-admin,
ccm-admin-
readonly
Central configuration administrator. Web UI base role that has privileges to administer the Central Configuration Management (CCM) database. There is a read/write and read-only variant to this role..
CMTS
Cable modem termination system. Either a router or bridge, typically at the cable headend. changeset database and MCD.
D
Data Over Cable Service Interface Specification
See
DOCSIS.
delegation
Act of assigning responsibility for managing a DNS subzone to another server..
DHCP option
DHCP configuration parameter and other control information stored in the options field of a DHCP message. DHCP clients determine what options get requested and sent in a DHCP packet.
dhcp-admin,
dhcp-admin-
readonly
DHCP server administrator. Web UI base role that has unconstrained DHCP server administration privileges. There is a read/write and read-only variant to this role.
DHCPACK
Acknowledgment used in a positive response to a DHCP request.
DHCPDISCOVER
Initial request for an IP address from the DHCP client to the server.
DHCPNACK
Acknowledgment used in a negative response to a DHCP request.
DHCPOFFER
Offer of an IP address sent by the DHCP server after receiving a DHCPDISCOVER from the client.
DHCPRENEW
Request from the DHCP client to the server for the renewal of an IP address.
DHCREQUEST
Client request for an IP address after receiving a DHCPOFFER from the DHCP server.
Digital Subscriber Line.
DOCSIS
Data Over Cable Service Interface Specification. Standard created by cable companies in 1995 to work toward an open cable system standard and that resulted in specifications for connection points, called interfaces..
Domain Name System
See
DNS.
dotted decimal notation
Syntactic representation of a 32-bit integer that consists of four eight-bit numbers written in base 10 with dots separating them for a representation of IP addresses. Many TCP/IP application programs accept dotted decimal notation in place of destination machine names.
DSL
See
Digital Subscriber Line.
dynamic DNS update
Protocol (RFC 2136) that integrates DNS with DHCP.
Dynamic Host Configuration Protocol
See
DHCP..
FQDN
Fully qualified domain name. Absolute domain name that unambiguously specifies a host's location in the DNS hierarchy.
fully qualified domain name
See
FQDN..
groups
Associative entity that combines administrators so that they can be assigned roles and constrained roles.
H
HINFO record
DNS Host Information resource record. Provides information about the hardware and software of the host machine.
hint server
See
root hint server.
host
Any network device with a TCP/IP network address.
host-admin,
host-admin-
readonly
Host administrator. Web UI base role that has unconstrained host administration privileges. There is a read/write and read-only variant to this role.
I
IEEE
Institute of Electrical and Electronics Engineers. Professional organization whose activities include developing communications and network standards.
in-addr.arpa
DNS address mapping domain with which you can index host addresses and names. The Internet can thereby convert IP addresses back to host names.
See also
reverse zone.
incremental zone transfer
See
IXFR.
IP address
Internet Protocol address. For example, 192.168.40.123.
IP history
Network Registrar tool that records the lease history of IP addresses in a database. query
Process by which a relay agent can request lease (and reservation) data directly from a DHCP server in addition to gleaning it from client/server transactions.
Lightweight Directory Access Protocol
See
LDAP.
local cluster
Location of the local Network Registrar CCM, DNS, DHCP, and TFTP servers.
See also
regional cluster.
localhost
Distinguished name referring to the name of the current machine. Localhost is useful for applications requiring a host name..
maximum client lead time
See
MCLT.
mail exchanger
Host that accepts electronic mail, some of which act as mail forwarders.
See also
MX record.
master name server
Authoritative DNS name server that transfers zone data to secondary servers through zone transfers.
MCD
Name of one of the Network Registrar internal databases. The other is CNRDB.
MCLT
Maximum client lead time. In DHCP failover, a type of lease insurance that controls how much ahead of the backup server's lease expiration the client's lease expiration should be.
MSO
Multiple Service Operator. Provides subscribers Internet access using cable or wireless technologies.
multinetting
State of having multiple DHCP scopes on one subnet or several LAN segments.
multithreading
Process of performing multiple server tasks.
MX record
DNS Mail Exchanger resource record. Specifies where mail for a domain name should be delivered. You can have multiple MX records for a single domain name, ranked in preference order.
N
NACK
Negative acknowledgment used in responding to a DHCP request.
namespace
All the nodes in a domain's large inverted tree, beginning at the root (.) domain. In a virtual private network, the informal name for the addresses contained in it.
NAPTR
DNS Naming Authority Pointer resource record. Helps with name resolution in a particular namespace and are processed to get to a resolution service. Based on proposed standard RFC 2915..
nrcmd
Network Registrar command line interface (CLI).
O
on-demand address pool
Wholesale IP address pool issued to a client (usually a VPN router or other provisioning device), from which it can draw for lease assignments. Also known as DHCP subnet allocation.
Organizationally Unique Identifier (OUI)
Assigned by the IEEE to identify the owner or ISP of a VPN.
See also
IEEE and
VPN.
P
ping
Packet Internetwork Groper. A common method for troubleshooting device accessibility that uses a series of Internet Control Message Protocol (ICMP) Echo messages to determine if a remote host is active or inactive, and the round-trip delay in communicating with the host.
policy
Group of DHCP attributes or options applied to a single scope or group of scopes.
primary master
DNS server from which a secondary server receive data through a zone transfer request.
PTR record
DNS Pointer resource record. Used to enable special names to point to some other location in the domain tree. Should refer to official (canonical) names and not aliases.
See also
in-addr.arpa.
R
RBE
See
routed bridge encapsulation..
regional cluster
Location of the regional Network Registrar CCM server.
See also
local cluster.
relay agent
Device that connects two or more networks or network systems. In DHCP, a router on a virtual private network that is the IP helper for the DHCP server.
Request for Comments
See
RFC.
DNS configuration record, such as SOA, NS, A, CNAME, HINFO, WKS, MX and PTR that comprises the data within a DNS zone. For more information, see
Appendix A, "Resource Records."
reverse zone
DNS zone that uses names as addresses to support address queries.
See also
in-addr.arpa.
RFC
Request for Comments. TCP/IP set of standards.
roles, constrained roles
Web UI administrators can be assigned one or more roles to determine what functionality they have in the application. A constrained role is a role constrained by further limitations. There are general roles for host, zone, address block, DHCP, and CCM database administration. You can further constrain roles for specific hosts and zones.
The
The mechanisms that help select DHCP scopes. They represent the selection tags on a DHCP server..
SOA record
DNS Start of Authority resource record. Designates the start of a zone.
SRV record
A server (SRV) record is a type of resource record that allows administrators to use several servers for a single domain, to move services from host to host with little difficulty, and to designate some hosts as primary servers for a service and others as backups.
stub resolver
DNS server that hands off queries to another server instead of performing the full resolution itself.
subnet allocation, DHCP
Network Registrar use of on-demand address pools for entire subnet allocation of IP addresses to provisioning devices.
subnet mask
A separate IP address, or part of the host IP address, that determines the part of the host IP address that is its
An attribute of the Network Registrar DNS server that by enabling it, the server checks the network address of the client before responding to a query.
subnetting
Action of dividing any network class into multiple subnetworks.
subzone
Partition of a delegated domain, represented as a child of the parent node. A subzone always ends with the name of its parent. For example, engineering.cisco.com. is a subzone of cisco.com.
subzone delegation
Dividing a zone into smaller pieces called subzones. You can delegate administrative authority for these subzones, and have them managed by people within those zones or served by separate servers.
supernet
Aggregation of IP network addresses advertised as a single classless network address.
T
TCP/IP
A suite of data communication protocols. Its name comes from two of the more important protocols in the suite: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). It forms the basis of Internet traffic.
TFTP
Trivial File Transfer Protocol. Used to transfer files across the network using UDP.
See also
UDP.
Trivial File Transfer Protocol
See
TFTP.
U
UDP
User Datagram Protocol. Connectionless TCP/IP transport layer protocol.
Universal Time (UT)
International standard time reference that was formerly called Greenwich Mean Time, also called Universal Coordinated Time (UCT).
V
virtual channel identifier (VCI) path identifier (VPI)
See
virtual channel identifier (VCI).
virtual private network
See
VPN.
VPN
Virtual private network..
zone-admin,
zone-admin-
readonly
Zone administrator. Web UI base role that has unconstrained zone administration privileges. There is a read/write and read-only variant to this role. | http://www.cisco.com/c/en/us/td/docs/net_mgmt/network_registrar/6-1-1/user/guide/users/Usergls.html | CC-MAIN-2015-32 | refinedweb | 1,776 | 51.55 |
Part of the L preview are two new views: RecyclerView and CardView. This post gives you an introduction to the RecyclerView, its many internal classes and interfaces, how they interact and how you can use them.
Let me start with the good news: RecyclerView is part of the support library. So you can use it right now. Ok: You can use it as soon as the final support lib accompanying the L release gets released. So better to familiarize yourself with it right away 🙂
Sample project
The screenshots and the video at the end of the post show the sample project for this post in action. You can find the source of this sample at github. Keep in mind that the RecyclerView API is not yet finalized. Google might still change things that could break the sample when they release the final version of Android L.
What’s with this odd name? Why RecyclerView?
This is how Google describes
RecyclerView in the API documentation of the L preview release:
A flexible view for providing a limited window into a large data set.
So
RecyclerView is the appropriate view to use when you have multiple items of the same type and it’s very likely that your user’s device cannot present all of those items at once. Possible examples are contacts, customers, audio files and so on. The user has to scroll up and down to see more items and that’s when the recycling and reuse comes into play. As soon as a user scrolls a currently visible item out of view, this item’s view can be recycled and reused whenever a new item comes into view.
The following screenshots of the sample app illustrate this. On the left is the sample app after the initial start. When you scroll the view up, some views become eligible for recycling. The red area on the right screenshot, for example, highlights two invisible views. The recycler can now put these views into a list of candidates to be reused should a new view be necessary.
Recycling of views is a very useful approach. It saves CPU resources in that you do not have to inflate new views all the time and it saves memory in that it doesn’t keep plenty of invisible views around.
Now, you might say: That’s nothing new. And you’re right! We had that with
ListView for a very long time. The concept of recycling views itself it not new. But while you previously had a
ListView where the appearance, recycling and everything was tightly coupled, Google now follows a much better, a much more flexible approach with the new
RecyclerView. I really like the approach Google has taken here!
RecyclerView doesn’t care about visuals
Here’s the thing: While with
Listview we had tight coupling, Google now uses an approach where the
RecyclerView itself doesn’t care about visuals at all. It doesn’t care about placing the elements at the right place, it doesn’t care about separating any items and not about the look of each individual item either. To exaggerate a bit: All
RecyclerView does, is recycle stuff. Hence the name.
Anything that has to do with layout, drawing and so on, that is anything that has to do with how your data set is presented, is delegated to pluggable classes. That makes the new
RecyclerView API extremely flexible. You want another layout? Plug in another
LayoutManager. You want different animations? Plug in an
ItemAnimator. And so on.
Here’s the list of the most important classes that
RecyclerView makes use of to present the data. All these classes are inner classes of the RecyclerView:
- RecyclerView.ViewHolder
- RecyclerView.Adapter
- RecyclerView.LayoutManager
- RecyclerView.ItemDecoration
- RecyclerView.ItemAnimator
In the next paragraphs I will briefly describe what each class or interface is about and how
RecyclerView uses it. In future posts I will revisit some of these classes, write about them in detail and show you how to customize them for your project’s needs.
ViewHolder
ViewHolders are basically caches of your
View objects. The Android team has been recommending using the ViewHolder pattern for a very long time, but they never actually enforced the use of it. Now with the new
Adapter you finally have to use this pattern.
It’s a bit weird that Google waited so long to enforce the usage of the ViewHolder pattern, but better late than never. If you do not know about the ViewHolder pattern, have a look at this Android training session. It uses the old
Adapter, but the pattern itself hasn’t changed.
Also searching for ViewHolder should yield plenty of hits to further blog posts. For example this post by Antoine Merle about ListView optimizations.
One thing that is specific to any
RecyclerView.ViewHolder subclass is that you can always access the root view of your
ViewHolder by accessing the public member
itemView. So there’s no need to store that within your
ViewHolder subclass.
And should you decide to override
toString() have a look at the base class. Its
toString() implementation prints some useful information you should consider using for your log messages as well.
Here’s the code for the ViewHolder of the sample project. The ViewHolder is an inner class of the sample project’s Adapter:
public final static class ListItemViewHolder extends RecyclerView.ViewHolder {
    TextView label;
    TextView dateTime;

    public ListItemViewHolder(View itemView) {
        super(itemView);
        label = (TextView) itemView.findViewById(R.id.txt_label_item);
        dateTime = (TextView) itemView.findViewById(R.id.txt_date_time);
    }
}
RecyclerView.Adapter
Adapters fulfill two roles: They provide access to the underlying data set and they are responsible for creating the correct layout for individual items. Adapters always were part of Android and were used in many places.
ListView,
AutoCompleteTextView,
Spinner and more all made use of adapters. All those classes inherit from
AdapterView. But not so RecyclerView.
For the new
RecyclerView Google has decided to replace the old Adapter interface with a new
RecyclerView.Adapter base class. So say good bye to things like
SimpleCursorAdapter,
ArrayAdapter and the like. At least in their current incarnation.
Currently there is no default implementation of RecyclerView.Adapter available. Google might add some later on, but I wouldn’t bet on this. For Animations to work properly, cursors and arrays aren’t the best fit, so porting the current
Adapter implementations might not make too much sense.
Since
RecyclerView.Adapter is abstract you will have to implement these three methods:
public VH onCreateViewHolder(ViewGroup parent, int viewType)
public void onBindViewHolder(VH holder, int position)
public int getItemCount()
The
VH in the method signatures above is the generic type parameter. You specify the concrete type to use when you subclass the
RecyclerView.Adapter. You can see this in line 3 of the next code sample.
The most basic adapter for the sample layout looks like this:
public class RecyclerViewDemoAdapter extends
        RecyclerView.Adapter
        <RecyclerViewDemoAdapter.ListItemViewHolder> {

    private List<DemoModel> items;

    RecyclerViewDemoAdapter(List<DemoModel> modelData) {
        if (modelData == null) {
            throw new IllegalArgumentException(
                    "modelData must not be null");
        }
        this.items = modelData;
    }

    @Override
    public ListItemViewHolder onCreateViewHolder(
            ViewGroup viewGroup, int viewType) {
        View itemView = LayoutInflater.
                from(viewGroup.getContext()).
                inflate(R.layout.item_demo_01,
                        viewGroup,
                        false);
        return new ListItemViewHolder(itemView);
    }

    @Override
    public void onBindViewHolder(
            ListItemViewHolder viewHolder, int position) {
        DemoModel model = items.get(position);
        viewHolder.label.setText(model.label);
        String dateStr = DateUtils.formatDateTime(
                viewHolder.label.getContext(),
                model.dateTime.getTime(),
                DateUtils.FORMAT_ABBREV_ALL);
        viewHolder.dateTime.setText(dateStr);
    }

    @Override
    public int getItemCount() {
        return items.size();
    }

    public final static class ListItemViewHolder
            extends RecyclerView.ViewHolder {
        // … shown above in the ViewHolder section
    }
}
RecyclerView.LayoutManager
The
LayoutManager is probably the most interesting part of the
RecyclerView. This class is responsible for the layout of all child views. There is one default implementation available:
LinearLayoutManager which you can use for vertical as well as horizontal lists.
You have to set a
LayoutManager for your
RecyclerView, otherwise you will see an exception at runtime:
08-01 05:00:00.000 2453 2453 E AndroidRuntime: java.lang.NullPointerException: Attempt to invoke virtual method ‘void android.support.v7.widget.RecyclerView$LayoutManager.onMeasure(android.support.v7.widget.RecyclerView$Recycler, android.support.v7.widget.RecyclerView$State, int, int)’ on a null object reference
08-01 05:00:00.000 2453 2453 E AndroidRuntime: at android.support.v7.widget.RecyclerView.onMeasure(RecyclerView.java:1310)
Only one method of
LayoutManager is currently abstract:
public LayoutParams generateDefaultLayoutParams()
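A subclass could satisfy this with something as simple as the following sketch. The wrap_content defaults here are just an example I picked, not taken from any stock implementation:

@Override
public RecyclerView.LayoutParams generateDefaultLayoutParams() {
    // default params used for children that don't bring their own LayoutParams
    return new RecyclerView.LayoutParams(
            ViewGroup.LayoutParams.WRAP_CONTENT,
            ViewGroup.LayoutParams.WRAP_CONTENT);
}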
But there is another one where the code states that you should override it since it's soon going to be abstract:
public void scrollToPosition(int position) {
if (DEBUG) {
Log.e(TAG, "You MUST implement scrollToPosition. It will soon become abstract");
}
}
That's very weird! Why not make it abstract right away? Anyway: better override this one to be on the safe side for when Google releases the final version of L.
But only overriding those two methods won’t get you very far. After all the
LayoutManager is responsible for positioning the items you want to display. Thus you have to override
onLayoutChildren() as well.
This method also contains a log statement stating “You must override onLayoutChildren(Recycler recycler, State state)”. Ok, then make it abstract 🙂 Luckily there’s still plenty (?) of time to change that into a proper abstract method for the final release of L. We all make mistakes. After all, my “Stupid stuff devs make” series is all about blunders that I made. So don’t get me wrong. No hard feelings here!
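To give you an idea of what a custom LayoutManager has to do there, here is a rough, heavily simplified skeleton of onLayoutChildren() that stacks items vertically. It is only a sketch of the general pattern (scrap all attached views, then fill the visible area); the helper methods are the ones I found in the LayoutManager base class, but details may well change before the final release:

@Override
public void onLayoutChildren(RecyclerView.Recycler recycler, RecyclerView.State state) {
    // detach all current views; they end up in the scrap list for reuse
    detachAndScrapAttachedViews(recycler);
    int top = 0;
    for (int i = 0; i < state.getItemCount(); i++) {
        View child = recycler.getViewForPosition(i);
        addView(child);
        measureChildWithMargins(child, 0, 0);
        int height = getDecoratedMeasuredHeight(child);
        layoutDecorated(child, 0, top, getDecoratedMeasuredWidth(child), top + height);
        top += height;
        if (top > getHeight()) {
            break; // don't lay out children that would be completely off-screen
        }
    }
}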
LinearLayoutManager
The
LinearLayoutManager is currently the only default implementation of
LayoutManager. You can use this class to create either vertical or horizontal lists.
The implementation of
LinearLayoutManager is rather complex and I only had a look at some key aspects. I will return to this implementation in my post about custom
LayoutManagers.
To use the
LinearLayoutManager you simply have to instantiate it, tell it which orientation to use and you are done:
LinearLayoutManager layoutManager = new LinearLayoutManager(context);
layoutManager.setOrientation(LinearLayoutManager.VERTICAL);
layoutManager.scrollToPosition(currPos);
recyclerView.setLayoutManager(layoutManager);
LinearLayoutManager also offers some methods to find out about the first and last items currently on screen:
findFirstVisibleItemPosition()
findFirstCompletelyVisibleItemPosition()
findLastVisibleItemPosition()
findLastCompletelyVisibleItemPosition()
Surprisingly these methods are not part of the source code in the SDK folder, but you can use them as they are part of the binaries. As I cannot imagine those being removed, I’m sure you’ll find these in the final L release as well.
Other methods help you get the orientation of the layout or the current scroll state. Others will compute the scroll offset. And finally you can reverse the ordering of the items.
Since I’m going to write an extra post about
LayoutManagers this should suffice for now.
RecyclerView.ItemDecoration
With an
ItemDecoration you can add an offset to each item and modify the item so that items are separated from each other, highlighted or, well, decorated.
You do not have to use an
ItemDecoration. If, for example, you use a
CardView for each item, there’s no need for an ItemDecoration.
On the other hand you can add as many
ItemDecorations as you like. The
RecyclerView simply iterates over all
ItemDecorations and calls the respective drawing methods for each of them in the order of the decoration chain.
The abstract base class contains these three methods:
public void onDraw(Canvas c, RecyclerView parent)
public void onDrawOver(Canvas c, RecyclerView parent)
public void getItemOffsets(Rect outRect, int itemPosition, RecyclerView parent)
Anything you paint in
onDraw() might be hidden by the content of the item views but anything that you paint in
onDrawOver() is drawn on top of the items. If you simply create a bigger offset and, for example, use this offset to paint dividers, this of course is of no importance. But if you really want to add decorations, you have to use
onDrawOver().
The
LayoutManager calls the
getItemOffsets() method during the measurement phase to calculate the correct size of each item's views. The
outRect parameter might look a bit odd at first. Why not use a return value instead? But it really makes a lot of sense, since this allows
RecyclerView to reuse one
Rect object for all children and thus save resources. Not necessarily nice — but efficient.
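A minimal ItemDecoration that only reserves some empty space below every item could look like this. It is just a sketch; the class name is made up, and the getItemOffsets() signature is the one from the preview quoted above:

public class SpacerDecoration extends RecyclerView.ItemDecoration {
    private final int space; // offset in pixels, supplied by the caller

    public SpacerDecoration(int space) {
        this.space = space;
    }

    @Override
    public void getItemOffsets(Rect outRect, int itemPosition, RecyclerView parent) {
        // RecyclerView reuses one Rect for all children; we just fill it in
        outRect.set(0, 0, 0, space); // left, top, right, bottom
    }
}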
One thing I didn’t expect considering the name of the class is that the
onDraw()/
onDrawOver() methods are not called for each item, but just once for every draw operation of the
RecyclerView. You have to iterate over all child views of the
RecyclerView yourself.
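In code, the drawing half of a decoration therefore follows this pattern (again only a sketch):

@Override
public void onDrawOver(Canvas c, RecyclerView parent) {
    // called once per draw pass for the whole RecyclerView,
    // so we loop over the currently attached children ourselves
    for (int i = 0; i < parent.getChildCount(); i++) {
        View child = parent.getChildAt(i);
        // draw the decoration relative to child.getLeft(), child.getTop(), ...
    }
}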
I will explain this in more detail in a follow-up post about writing your own
ItemDecorations.
RecyclerView.ItemAnimator
The
ItemAnimator class helps the
RecyclerView with animating individual items.
ItemAnimators deal with three events:
- An item gets added to the data set
- An item gets removed from the data set
- An item moves as a result of one or more of the previous two operations
Luckily there exists a default implementation aptly named DefaultItemAnimator. If you do not set a custom
ItemAnimator,
RecyclerView uses an instance of
DefaultItemAnimator.
Obviously for animations to work, Android needs to know about changes to the dataset. For this Android needs the support of your adapter. In earlier versions of Android you would call
notifyDataSetChanged() whenever changes occurred, but this is no longer appropriate. This method triggers a complete redraw of all (visible) children at once without any animation. To see animations you have to use more specific methods.
The RecyclerView.Adapter class contains plenty of
notifyXyz() methods. The two most specific are:
public final void notifyItemInserted(int position)
public final void notifyItemRemoved(int position)
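In the sample adapter this could be wrapped in small mutation helpers. The names addItem() and removeItem() are my own invention for illustration, not part of the API:

public void addItem(int position, DemoModel model) {
    items.add(position, model);
    notifyItemInserted(position); // triggers the ItemAnimator's add animation
}

public void removeItem(int position) {
    items.remove(position);
    notifyItemRemoved(position); // triggers the ItemAnimator's remove animation
}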
The following video shows the result of an addition as well as a removal of an item in the sample app:
A short video showing the default animations for the removal and addition of elements
Listeners
RecyclerView also offers some rather generic listeners. Once again you can safely forget everything you used to use up to now. There is no
OnItemClickListener or
OnItemLongClickListener. But you can use a
RecyclerView.OnItemTouchListener in combination with gesture detection to identify those events. A bit more work and more code to achieve the same result. I still hope for Google to add those Listeners in the final release. But whether those Listeners will be added is an open question.
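To sketch what that workaround looks like (this is roughly what the sample project does; findChildViewUnder() and getChildPosition() are the RecyclerView methods I used for this, and the listener gets registered with addOnItemTouchListener() as shown further below):

@Override
public boolean onInterceptTouchEvent(RecyclerView rv, MotionEvent e) {
    gesturedetector.onTouchEvent(e); // delegates to onSingleTapUp() below
    return false; // never consume the event so that scrolling keeps working
}

private class RecyclerViewDemoOnGestureListener
        extends GestureDetector.SimpleOnGestureListener {
    @Override
    public boolean onSingleTapUp(MotionEvent e) {
        View view = recyclerView.findChildViewUnder(e.getX(), e.getY());
        if (view != null) { // the tap might have hit empty space
            int position = recyclerView.getChildPosition(view);
            // handle the click on the item at this position
        }
        return super.onSingleTapUp(e);
    }
}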
Combining all classes
You combine the classes either in a fragment or an activity. For the sake of simplicity my sample app uses activities only.
First of all here's the layout file containing the RecyclerView:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/recyclerView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context=".MainActivity"
        tools:… />

    <ImageButton
        android:id="@+id/fab_add"
        android:layout_alignParentRight="true"
        android:layout_alignParentBottom="true"
        android:layout_width="@dimen/fab_size"
        android:layout_height="@dimen/fab_size"
        android:layout_gravity="bottom|right"
        android:layout_marginRight="16dp"
        android:layout_marginBottom="16dp"
        android:background="@drawable/ripple"
        android:stateListAnimator="@anim/anim"
        android:src="@drawable/ic_action_add"
        android:… />

</RelativeLayout>
As you can see, nothing special here. You do not define the orientation or stuff like that on the
RecyclerView. Actually
RecyclerView itself makes no use of the attributes; it passes them on to the parent (which is
ViewGroup) and that’s it.
There is one place within RecyclerView where an
AttributeSet is used and that is in the
generateLayoutParams() method:
@Override
public ViewGroup.LayoutParams generateLayoutParams(AttributeSet attrs) {
    if (mLayout == null) {
        throw new IllegalStateException("RecyclerView has no LayoutManager");
    }
    return mLayout.generateLayoutParams(getContext(), attrs);
}
In this snippet the
RecyclerView passes the
AttributeSet on to the
LayoutManager.
The Java code is also pretty simple:
setContentView(R.layout.activity_recyclerview_demo);
recyclerView = (RecyclerView) findViewById(R.id.recyclerView);

LinearLayoutManager layoutManager = new LinearLayoutManager(this);
layoutManager.setOrientation(LinearLayoutManager.VERTICAL);
layoutManager.scrollToPosition(0);
recyclerView.setLayoutManager(layoutManager);

// allows for optimizations if all item views are of the same size:
recyclerView.setHasFixedSize(true);

// For the sake of simplicity I misused the Application subclass as a DAO
List<DemoModel> items = RecyclerViewDemoApp.getDemoData();
adapter = new RecyclerViewDemoAdapter(items);
recyclerView.setAdapter(adapter);

RecyclerView.ItemDecoration itemDecoration =
        new DividerItemDecoration(this, DividerItemDecoration.VERTICAL_LIST);
recyclerView.addItemDecoration(itemDecoration);

// this is the default;
// this call is actually only necessary with custom ItemAnimators
recyclerView.setItemAnimator(new DefaultItemAnimator());

// onClickDetection is done in this Activity's OnItemTouchListener
// with the help of a GestureDetector;
// Tip by Ian Lake on G+ in a comment to this post:
//
recyclerView.addOnItemTouchListener(this);
gesturedetector =
        new GestureDetectorCompat(this, new RecyclerViewDemoOnGestureListener());
Connecting all those elements together roughly consists of these steps:
- Get a reference to your RecyclerView
- Create a LayoutManager and add it
- Create an Adapter and add it
- Create zero or more ItemDecorations as needed and add them
- Create an ItemAnimator if needed and add it
- Create zero or more listeners as needed and add them
All in all about 30 lines of code.
Now of course this is misleading. That’s only the glue code. The really interesting stuff is in
RecyclerView's many inner classes which you can subclass and tweak to your needs. That’s where the real work is done.
But the separation of concerns Google created helps you stick to one task within one implementation and it should make reuse easier to achieve. That’s why I like
RecyclerView and its ecosystem. I’m not afraid to criticize big G, but that’s well done, Google!
Gradle integration
To use RecyclerView you have to add it to your Gradle file. Adding the support library alone is not enough:
dependencies {
    //…
    compile 'com.android.support:recyclerview-v7:+'
    //…
}
Is that the final API?
Of course I do not know if the concrete implementations that the preview contains will be in the final release of Android L. But I guess so. And I expect some additions as well as minor changes to the API, based on bug reports and developer feedback.
Google itself gives one hint in the current API documentation about more stuff to come. The documentation for the
RecyclerView.LayoutManager class contains this nugget:
Several stock layout managers are provided for general use.
So we can expect more LayoutManagers. Which, of course, is good. Furthermore I expect at least one default ItemDecoration as well. After all the support library’s sample project contains a
DividerItemDecoration, which works well with the
LinearLayoutManager.
I’m more skeptical about adapters. While an
ArrayAdapter (or better yet,
ListAdapter) is very well possible, I am more doubtful about a
CursorAdapter since cursors do not lend themselves easily to the new addition and removal notifications within the
Adapter.
Lucas Rocha’s TwoWayView to simplify your life
I strongly recommend having a look at Lucas Rocha's TwoWayView project. He has updated his project to work with
RecyclerView and has done a great deal to make using
RecyclerView a lot easier. For many use cases the default layouts he provides should suffice. And he also provides support for custom
LayoutManagers. Which are simpler to write using his framework than with the base
RecyclerView.
Take a look at his project and check out if it covers all you need. Using it helps you get rid of some of
RecyclerView’s complexity.
For more information about his project see his blog post about how TwoWayView extends and simplifies RecyclerView.
To learn about news about this project follow Lucas Rocha on Google plus or Twitter.
I will cover TwoWayView in this series as well – so stay tuned 🙂
Report bugs!
We currently have a developer preview; it's the first time Google has done this for Android. Really nice. But of course this preview is not free of bugs. To help us all get a more stable final release, give feedback or issue bug reports, if you encounter anything that bothers you or is a bug. There is a special L preview issue tracker.
And that’s it for today
I started this post as part of the preparation for my talk at the July meetup of the Dutch Android User Group.
I had much fun presenting there. And I had fun digging into this topic – and still have. Reading the source of RecyclerView and its many inner classes is really interesting. Thanks to the organizers for giving me the opportunity to speak about this topic and for forcing me to dig into this stuff 🙂
At Utrecht I had 15-20 minutes for my talk. How I managed to get it done in time is still a mystery to me. As you can see there’s a lot to talk about – and this post is only the beginning.
Until next time!
67 thoughts on “A First Glance at Android’s RecyclerView”
Can this already be used in production code? Since the support lib is bundled with the app, changes to it shouldn't break anything?
Even if it’s possible (I haven’t tried), I would not do so. Just start a branch where you use the new stuff and fix bugs when the final L release gets published. The API is bound to change at places where it doesn’t suit effective development.
The source code is bundled with the sdk (just open the recycler*source.jar). You can see that there are various //TODO in the multiple classes of recyclerview.
Not to mention that some things are entirely absent (no gridlayoutmanager, no headers, …) and we don’t even know if they will be added for the release.
For gridviews and even more staggered gridviews, I would not use recyclerview before release. Even though there are some communities made libraries to do so, using a preview API with a library that has never been tested on a real life large scale is asking for a disaster.
For a ListView replacement, if you are writing a new component from scratch that needs some advanced features that recyclerview offers, it might a viable solution if :
-writing it on top of ListView would need 1000 lines of unmaintainable code that would be very costly to migrate to RecyclerView.
-you are ready to delve into the full recyclerview code & make the appropriate corrections.
Interesting. I only knew of this location within the SDK folder: ./sources/android-20/android/support/v7/widget/
But since all files in the jar are bigger, those probably are newer. Furthermore they contain the four methods
findFirstVisibleItemPosition() and so on that I missed in the sources. Thanks for pointing this out!
Is it me or I cannot find the project demo with this excellent talk? Thanks!
You’re right. I’m too knackered to upload it to github today. I will add it tomorrow. Sorry!
No problema at all. I said that because it is weird not to upload the project =D. Thanks!
Forgot about it again. It’s now on github.
I don't understand; it is too difficult for me.
Thanks for this article! Really exhaustive!
Can ItemAnimator be used for animating an item’s height (ie expand/collapse an item)?
If you want to scale it out of view on removal and scale it into view on addition, then yes.
ItemAnimator is only for animations based on addition or removal: animating the added or removed items themselves, filling up empty space after a removal, or making room before an addition. For these situations, though, you should be able to do anything. It's not useful for anything else.
I have been able to use RecyclerView in conjunction with LayoutTransition in order to make expand animations. It is a bit of a hack though; it will not necessarily work with the release version of RecyclerView.
I could post the full code somewhere if you want, but basically (rough sketch below):
- I have two view types, NORMAL & EXTENDED.
- The onCreateViewHolder method always inflates the same layout. It contains the retracted layout and a ViewStub that I inflate if I need to extend the cell.
- The model keeps track of whether the view should be expanded or not; that way RecyclerView can reserve the right amount of space during scrolling with onBindViewHolder.
- On click on the cell, I activate LayoutTransition.CHANGING on it (and deactivate it in onBindViewHolder to avoid some glitches), change the item type to EXTENDED and switch the visibility of the extended part of the layout from GONE to VISIBLE.
It is not perfect though; RecyclerView does not always animate the move of the next cells.
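To make this more concrete, here is a rough sketch of the bind and click handling. Class, field and constant names are illustrative, not my real code, and this is based on the preview version of RecyclerView:
[java]
// Rough sketch – Item, the holder fields and TYPE_* constants are illustrative.
// The row container has android:animateLayoutChanges="true" in its layout XML.
@Override
public int getItemViewType(int position) {
    return items.get(position).isExpanded() ? TYPE_EXTENDED : TYPE_NORMAL;
}

@Override
public void onBindViewHolder(final MyViewHolder holder, int position) {
    final Item item = items.get(position);
    // deactivate the transition while binding to avoid glitches
    holder.container.getLayoutTransition()
            .disableTransitionType(LayoutTransition.CHANGING);
    holder.extendedPart.setVisibility(
            item.isExpanded() ? View.VISIBLE : View.GONE);
    holder.itemView.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // activate CHANGING only for the clicked cell
            holder.container.getLayoutTransition()
                    .enableTransitionType(LayoutTransition.CHANGING);
            item.setExpanded(!item.isExpanded());
            holder.extendedPart.setVisibility(
                    item.isExpanded() ? View.VISIBLE : View.GONE);
        }
    });
}
[/java]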
Seems like a little bit of a hack though. Does it still work in the production version of RecyclerView?
I explicitly wrote that it is a bit of a hack 🙂
It is a feature of the platform, but it has not been written for scrolling containers.
I have not tested it with the release version of RV. Outside of the technical side it was an awful design anyway; the product team came to its senses and we replaced it.
Can I download the above example somewhere? I'm especially interested in the add button. What do the drawables for the ImageButton and the background look like? And what does the StateListAnimator do?
I'm just resting on others' shoulders. In this case Gabriele Mariotti's gist:
Yes. It’s on github.
You should try out my version at parchment.mobi. I'd love to hear how they compare.
As the author of the parchment library you’re in a much better position to compare both approaches than I am. Especially since my time is too limited to do that.
Thank you, this is a very good first look at this API!
About your skepticism about the future availability of a CursorAdapter… Well I certainly hope it will happen! It would be very strange if it didn't. I mean, most of the apps I've been working on have their lists populated with data coming from a ContentProvider, and use CursorLoaders. I really think this should still be possible to do with the new API – and if not, then I would be very surprised, and upset 🙂
Benoît, it is of course possible to do a CursorAdapter the same way we did before (more or less at least). The problem is: will this be enough? Don't we want proper animations for removing items, changing items and the like? And that's where I think Loaders and stuff like swapCursor() do not work anymore. From ContentObservers (the basis of Loaders) we do not get the information as to which items have changed. But you need this information to call notifyItemRemoved(), notifyItemInserted() or notifyItemChanged(). I still do not have a solution for this. The only one I can think of is comparing data sets. But that can be very costly depending on the size of the set.
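Just to illustrate what I mean by comparing data sets, here is a naive sketch inside the adapter. It assumes items have stable ids, ignores moved items, and the nested lookups are exactly why this can get costly:
[java]
// oldItems is the adapter's current data (a field of the Adapter)
void swapData(List<Item> newItems) {
    // first pass: remove everything that is gone from the new data
    for (int i = oldItems.size() - 1; i >= 0; i--) {
        if (indexOfId(newItems, oldItems.get(i).getId()) < 0) {
            oldItems.remove(i);
            notifyItemRemoved(i);
        }
    }
    // second pass: insert new items and update changed ones
    for (int i = 0; i < newItems.size(); i++) {
        int oldIndex = indexOfId(oldItems, newItems.get(i).getId());
        if (oldIndex < 0) {
            oldItems.add(i, newItems.get(i));
            notifyItemInserted(i);
        } else if (!oldItems.get(oldIndex).equals(newItems.get(i))) {
            oldItems.set(oldIndex, newItems.get(i));
            notifyItemChanged(oldIndex);
        }
    }
}

private int indexOfId(List<Item> list, long id) {
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i).getId() == id) return i;
    }
    return -1;
}
[/java]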
It might be more appropriate to use in-app messaging (Otto, EventBus) to propagate dataset changes and to trigger animations.
It's surprising that ItemDecoration does not have information about the current position or ViewHolder. How do you customize an ItemDecoration for different items? Say, having a rounded colored background only for the first and last items seems impossible.
You have access to the View object (because you have to iterate over all views). With the id of the view, you have all the information you need:
[java]
if (view.getId() == R.id.header) {
view.setBackground(fancyHeaderColor);
} else {
view.setBackground(boringItemStatelistDrawable);
}
[/java]
How do you implement lazy loading in a RecyclerView?
Hi Wolfram,
Your Adapter sample has a mistake in line 23 where you instantiate your ViewHolder – you pass the viewType, but the ViewHolder doesn’t have a parameter for the type.
The current version of the released RecyclerView (as of writing this comment) will set `mItemViewType` itself after calling the implemented version of `onCreateViewHolder(ViewGroup, int)` (`RecyclerView#createViewHolder(ViewGroup, int)`), which allows you to call `ViewHolder#getItemViewType()`.
Not sure I like the ViewHolder enforcement – we have tended to use the HolderView pattern over the last year () and if we continue to use it, then it just means the ViewHolder is going to be empty. Maybe it's better to have dumber views, MVP style, but I have grown to like it, and I see advantages and disadvantages on each side.
This code was based on the L preview. Haven’t rechecked the sample code afterwards. As soon as I have time, I will do so and release a new version of the sample app and an updated version of this post. For now I’m just going to add a short text pointing to your comment 🙂
BTW: Thanks!
I also think the ViewHolder enforcement is odd:
It comes very late: why not try to establish this as a needed pattern earlier?
The community has meanwhile found interesting alternatives to it – as you mention.
But I guess Google has better insight into what goes on in the wild – and probably chose to enforce it because too many developers didn't use any proper solution.
Whoa. Stupid response by me 🙂
Fixed! Obviously I didn’t write both parts of the post as one fluid text but at different states of my project.
Hi, very nice tutorial.
I have one question: how could I animate the selected item and scroll it to the center – triggered by selection, not by click or long click?
One way I found is to animate the item in an OnFocusChangeListener: I get the position of the focused item, animate it and scroll it to the center. But I don't think that's the proper way of doing this.
Any suggestions?
So, the official support library is here and Android 5.0 is nearly out… but I still can't find out how to use a RecyclerView with a Cursor. Is there any example, best practice or library?
Sorry, Marcus, but I still haven’t seen anything about this. I’m curious myself how to best solve this issue.
My current line of thought goes something like this: any change not done by the user (e.g. caused by new data as a result of a sync): ignore this in the UI – unless the user reloads the screen (e.g. comes back from another app to revisit where she left off). Any change done by the user: manually trigger those changes in the adapter to get animations. This basically means no longer using Loaders. Since I do like them, I'm not sure this is a good proposal 🙂
That’s what I am going to try for an upcoming app anyway. It indeed doesn’t use Loaders. Not sure, though, how this will work out.
Currently I'm developing a Wear app just for private use, so I don't really care about the animations. But I'm not sure it can be right to have to develop my own CursorAdapter – something that has always been part of the SDK…
Also it feels very wrong to not use Loaders anymore. And ignoring background updates of data can't be the right way either…
If you don't care, why not do it as always – and just call notifyDataSetChanged() whenever the onLoadFinished() method gets called?
About ignoring background updates: I suggest to only do so while your app is currently on the screen. If the user switches to another app, goes to the home screen or whatever, any changes should be reflected when the user returns to the app. The user won't know about any backend changes anyway. If there are frequent changes and the user indeed does expect them, you could show some kind of clickable "New data available. Do you want to refresh this list?" information on the screen.
There ARE cases where background model updates should be reflected in real time in the UI.
I have no solution yet either, but ignoring sync- or GCM-based updates is not the right direction to me.
As stated in the docs, notifyDataSetChanged() should only be used as a last resort.
The preferred notifyItem* methods should be used as much as possible. But how can we call those methods from a service (the GCM or SyncAdapter case), which cannot hold a reference to the adapter?
F.
Might depend on the situation and the amount of data involved. One could use either a LocalBroadcastManager or – IMHO better – an event bus to propagate data changes.
You could – say – post an event with a list of ids for all changed elements at the end of a sync. Even better, three lists: deletedElements, newElements, changedElements. Then register to listen for these events in your fragment and call the appropriate methods on the adapter. Right now that would be my preferred method. Still not sure about the best way – but that's my current line of thought for stuff where an immediate reflection of changes is appropriate.
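Roughly what I have in mind, sketched with an Otto-style bus. The DataChangedEvent class and the adapter helpers (removeItemWithId() and friends) are made up for this example:
[java]
// Posted by the sync code once it knows which elements were affected.
public class DataChangedEvent {
    public final List<Long> deletedIds;
    public final List<Long> newIds;
    public final List<Long> changedIds;

    public DataChangedEvent(List<Long> deletedIds, List<Long> newIds,
            List<Long> changedIds) {
        this.deletedIds = deletedIds;
        this.newIds = newIds;
        this.changedIds = changedIds;
    }
}

// In the fragment; register/unregister the bus in onResume()/onPause().
@Subscribe
public void onDataChanged(DataChangedEvent event) {
    for (long id : event.deletedIds) {
        adapter.removeItemWithId(id);   // calls notifyItemRemoved() internally
    }
    for (long id : event.newIds) {
        adapter.addItemWithId(id);      // calls notifyItemInserted() internally
    }
    for (long id : event.changedIds) {
        adapter.changeItemWithId(id);   // calls notifyItemChanged() internally
    }
}
[/java]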
I agree. As a replacement for a bus, you could insert modification events into a table (three types of event: insert, delete, update), for which your adapter or the fragment registers as an observer (maybe with a good old CursorLoader). Once queried and the corresponding notifyItem* method called on the adapter, the table can be flushed (or the event marked as treated).
I'm really looking forward to an Android sample of RecyclerView live updating!
F.
I’ve just created a draft as a reminder to post something about this 🙂 But given my workload that’s not going to happen soon.
I am getting a "LayoutManager already attached to a RecyclerView" error.
[java]
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
        Bundle savedInstanceState) {
    adapter = new BuzzFeedCardAdapter(buzzFeedlist, getActivity());
    try {
        View v = inflater.inflate(R.layout.buzz_feed_recycler_view, container, false);
        recycleView = (RecyclerView) v.findViewById(R.id.cardList);
        recycleView.setHasFixedSize(true);
        recycleView.setLayoutManager(layoutManager);
        recycleView.setAdapter(adapter);
        recycleView.setItemAnimator(new DefaultItemAnimator());
        return v;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
[/java]
I am getting this error when I navigate back to this fragment from other fragments. Any help is highly appreciated.
Hi, I have the same problem as you.
If you have found a solution for this kind of problem, could you send me a brief description via mail?
Nice post, thanks.
I found this post on ItemAnimator – hope it is useful.
I have a question: if I want to build a complex ListView where every item is different, how can I use the RecyclerView?
Not sure I understand you correctly. If really every item is different, then you shouldn't use a RecyclerView. The RecyclerView is only useful if there are items to be recycled, that is, if there are repeating elements (no matter whether those items repeat one after the other or irregularly).
If you just have a lot of different item types, you have to override getItemViewType(). And keep these types in mind when dealing with bindViewHolder() and createViewHolder().
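In code that boils down to something like this. The TYPE_* constants, the layouts and the model's isHeader() method are made up for the example:
[java]
private static final int TYPE_HEADER = 0;
private static final int TYPE_ITEM = 1;

@Override
public int getItemViewType(int position) {
    return items.get(position).isHeader() ? TYPE_HEADER : TYPE_ITEM;
}

@Override
public MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
    // inflate a different layout per view type
    int layout = (viewType == TYPE_HEADER)
            ? R.layout.row_header : R.layout.row_item;
    View itemView = LayoutInflater.from(parent.getContext())
            .inflate(layout, parent, false);
    return new MyViewHolder(itemView);
}

@Override
public void onBindViewHolder(MyViewHolder holder, int position) {
    // getItemViewType(position) tells you which views are available here
}
[/java]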
Regarding click listeners: a nice way is to implement View.OnClickListener (or OnLongClickListener) in your ViewHolder and attach it as the listener to the corresponding View in RecyclerView.Adapter.onBindViewHolder(…). Then it is possible to deliver the click upward to the Adapter (or an Adapter.Listener if one is declared and provided).
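Something like this – OnItemClickListener is a made-up callback interface:
[java]
public interface OnItemClickListener {
    void onItemClick(int position);
}

public class MyViewHolder extends RecyclerView.ViewHolder
        implements View.OnClickListener {

    private final OnItemClickListener listener;

    public MyViewHolder(View itemView, OnItemClickListener listener) {
        super(itemView);
        this.listener = listener;
        itemView.setOnClickListener(this);
    }

    @Override
    public void onClick(View v) {
        // deliver the click upward together with the position;
        // getPosition() is the preview-era API (later renamed getAdapterPosition())
        listener.onItemClick(getPosition());
    }
}
[/java]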
Hi Wolfram, I have two buttons: clicking button 1 should display only the information for that button, and clicking button 2 should show only the information for that button. I appreciate all the information you can give me, thank you.
All you need to do is provide your adapter with the correct data. That way you only need one adapter and one RecyclerView (if the views of both really are the same).
Hi Wolfram, thank you very much for your cooperation. I have the following question: when you say I should provide the correct data, what exactly is the correct data?
I'll send you pictures of my first activity's Java code, which is:
And this is the adapter that I'm using:
What is the data you say I should place or change? I appreciate all your help.
I don't know, since it's your app, which data belongs where. But the listclientes is the one you have to change depending on which button is pressed. If it's a completely different list, simply replace it and call notifyDataSetChanged() within your adapter. If it's only slightly different, it's probably better to add/remove/change those few items and use the more specific notifyXyz() calls.
If you upload code, BTW, it's much better to actually paste the source code in text format somewhere (e.g. using GitHub's gist) than taking screenshots of the code. Simpler for you, simpler for all those that might want to help.
Thanks for your help. As you suggested, I attached the project on GitHub. I hope you can help me, because I cannot find the way. Thanks, and I'll stay tuned for your answer.
I think this has nothing to do with the RecyclerView. It’s basic event handling and Intent usage. Have a look at the official Android trainings by Google. This deals with starting a new Activity based on some action in the first one.
Hi Wolfram: thanks for the info you offered me. I have tried to do as you indicated, but when I tap the first button I get all the information from the list, and when I tap the second button I get all the information as well. The idea, as I said, is that each button shows me only its own information: the first should show only the laboratories and the second only the clinics. I appreciate your cooperation and support.
At some point you have to filter this list. Be it when querying a database or however you get to the list (I think it’s the server’s JSON response in your case – or it could be calling the right endpoint on your server). Whatever it is for your use case, it is nothing specific to Android. It’s most likely something you can solve with plain Java list handling.
Only take care to do that before adding the list to the adapter.
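For example, with plain list filtering before handing the data to the adapter. Cliente, getTipo() and ClientesAdapter are just assumptions about your code:
[java]
// keep only the elements belonging to the pressed button
List<Cliente> soloLaboratorios = new ArrayList<Cliente>();
for (Cliente cliente : listclientes) {
    if ("laboratorio".equals(cliente.getTipo())) {
        soloLaboratorios.add(cliente);
    }
}
// hand only the filtered list to the adapter
adapter = new ClientesAdapter(soloLaboratorios, getActivity());
recyclerView.setAdapter(adapter);
[/java]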
Any idea how to implement a snappy horizontal RecyclerView with CardViews that snaps the focused item to the center every time?
This solution () looks pretty buggy 🙁 And I haven't found any other solution.
I am getting a Java NullPointerException with the same code. Any help?
Thank you very much for sharing your knowledge with everyone. Could you help me? I need to place a Spinner in a RecyclerView. Do you have a tutorial or link that shows how to do this? I appreciate all your help.
How do I access the invisible items in the RecyclerView? Please help me out…
Not sure which invisible items you are referring to. If you are talking about views that are set to View.GONE, it's no different than with any other view. You have to access them via a ViewHolder (and use findViewById() there).
I'm talking about the list items which have been scrolled out of the RecyclerView; I'm not able to access those items' values via recyclerView.getChildAt().
Let me give you a link where I have posted my problem in more detail.
See my answer on SO. You have to change the data you use in your Adapter (and call notifyItemChanged() for all changed items). Not all items of your dataset are visible. That's the point of the RecyclerView: it only creates the minimum number of views and recycles those scrolled off the screen. Thus the children of the RecyclerView should only be modified from within your Adapter.
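A minimal sketch of such a modification inside the Adapter – Item and the items list are placeholders:
[java]
public void updateItem(int position, Item newItem) {
    items.set(position, newItem);
    // if the row is currently visible it gets rebound immediately;
    // off-screen rows simply get the new data on their next bind
    notifyItemChanged(position);
}
[/java]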
Fine thank you Rittmeyer.
If it helped you, please upvote it. If not, comment on it to describe your problem in more detail.
Now I am able to access the visible children (i.e. the list items displayed within the RecyclerView's visible height) via the adapter. But I still cannot access the invisible children of the RecyclerView. So please help me out.
I've posted this same comment on the Stack Overflow page where I raised my question.
Please help me out Mr Rittmeyer.
Now I've got this issue. How do I solve it?
Nice article. It helped me understand RecyclerView. Thanks!