text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
I am wondering whether I can use the power scripts to save me some significant manual editing/setup. I'll go into my particular scenario below, but I think the utility of doing this extends to a wide range of "edit time" utility scripts that basically function as macros, doing what you would otherwise have to tediously do by hand.
The question is not so much how to write these scripts, that is straightforward. But how to trigger them from the editor (i.e. only when I want to explicitly do so)
My scenario is: I am using unityCar to do a drving simulation. I'd like to duplicate cars but give them different paint jobs (green, yellow, etc.) Now I could go in after selecting and duplicating my original and modify each relevant material of the mesh renderers in the clone. But I'd much rather just have a script do that once. After that the color of the car is changed and I don't have to call the script again (unless I want to change it again in future)
Setting aside this particular use, I think there are probably many other applications of this type of utility script or almost a editing "macro" for various purposes.
Questions:
how would I trigger execution of such a script from the editor ? I am aware of [the [ExecuteInEditMode] directive][1], which allows scripts to be run in Edit mode also, but they trigger automatically on Update, OnGUI, OnRenderObject which is not really whats called for here. I'd want an explicit, manual trigger for the script.
whats a good place to attach the script to (I'd assumed for simplicity to the car prefab itself so the object the script applies to is already implied, but I guess the script itself is really a "global static" - no big deal either way, just wanted an opinion if theres a good reason to do it one way vs the other
[1]:
Answer by VivienS
·
Jul 27, 2012 at 07:58 AM
Hi there.
I came across this neat little script a while ago and it saved me a lot of time since! (It is also on UnifyCommunity, but the page is offline, as I'm posting this):
It uses the undocumented MacroEvaluator class of Unity. Works in 3.4.x and 3.5.x though.
From what I understand, MacroEvaluator takes only JavaScript code.
Cheers, Vivien
Here's the code, btw., as it seems the link is not working! (source is of course the above link.)
using UnityEditor;
using UnityEditor.Macros;
using UnityEngine;
/// <summary>
/// Allows use of the undocumented MacroEvaluator class. Use at your own risk.
/// </summary>
public class Macros : EditorWindow
{
string macro = "";
/// <summary>
/// Adds a menu named "Macros" to the Window menu.
/// </summary>
[MenuItem ("Window/Macros")]
static void Init ()
{
CreateInstance<Macros>().ShowUtility();
}
void OnGUI ()
{
macro = EditorGUILayout.TextArea(macro, GUILayout.ExpandHeight(true));
if (GUILayout.Button("Execute")) {
MacroEvaluator.Eval.
Run script in Edit mode when paused?
3
Answers
What Does "ExecuteInEditMode" Actually Do?
1
Answer
Write a Editor to select file from disk and import to unity
1
Answer
Triggers not working - What's wrong with this script?
2
Answers
UniSciTE in version 3.x Script Edtior
1
Answer
|
https://answers.unity.com/questions/288648/triggering-a-script-from-the-editor-utility-macro.html
|
CC-MAIN-2019-43
|
refinedweb
| 536
| 64.3
|
Search: Search took 0.02 seconds.
- 11 May 2009 3:32 PM
Hey sregg,
Nope I didn't :-(
I had a bit of a go but the ComboBox depends too much on list stuff and you end up copying almost all of the code.
Let us know if you have a successful attempt.
...
- 7 May 2009 3:50 PM
OK, so closing a hidden window causes a leak too.
- 7 May 2009 3:33 PM
Thanks!
Also if I hide() a window then close() it, it does not get unregistered from the WindowManager as close immediately exits if the window is hidden and therefore leaks.
Showing a hidden...
- 6 May 2009 3:40 PM
I have discovered a memory leak when closing a window. My profiler shows that the eventPreview is the cause.
Adding to the close method:
if (eventPreview != null) {
...
- 28 Jan 2009 6:46 PM
- Replies
- 8
- Views
- 4,355
Hmm good point. Sorry about that. Shall I repost it there?
- 28 Jan 2009 5:06 PM
- Replies
- 8
- Views
- 4,355
When entering text into a MessageBox.prompt the enter key should close the message box. I think this should be built into to the toolkit but I can't seem to get this to work even if I put in my own...
- 14 Jan 2009 8:57 PM
- Replies
- 0
- Views
- 846
The SummaryType method:
public abstract double render(Object v, ModelData m, String field, Map<String, Object> data)should return an Object.
This allows for non-double based summaries like...
- 11 Jan 2009 5:14 PM
- Replies
- 2
- Views
- 2,066
Radio.setBoxLabel after render does not change the label. Looking at the code the label is only setup during the onRender.
A simple workaround:
public class RadioEx extends Radio {
...
- 8 Jan 2009 2:30 PM
Can any one help me here?
- 4 Jan 2009 8:46 PM
- Replies
- 13
- Views
- 5,245
Ahh oops :"> I confused 1.2 with 2.0
- 4 Jan 2009 4:57 PM
Did "scalable" buttons make it into 2.0?
- 4 Jan 2009 4:30 PM
Hi, I'm trying to find an example of a ComboBoxTree in GXT. Basically I want what is discussed in this ExtJS thread:
Does any one know of one?...
- 4 Dec 2008 3:11 PM
Yay can't wait :)
- 1 Oct 2008 6:36 PM
Ok thanks for your replies.
Can I please request this feature be considered.
- 1 Oct 2008 4:58 PM
TextToolItem seems to have the same problem with Button in that the background image is not stretched to the required height.
- 30 Sep 2008 5:19 PM
Hi I'm trying to create a button with mixed small and large text. The text layout seems to work fine but the background button image is not tall enough and some of the bottom of btn-sprite.gif is...
- 25 Sep 2008 5:22 AM
- Replies
- 2
- Views
- 1,405
com.extjs.gxt.ui.client.widget.form.Field.setEmptyText(String)
I'm using GXT 1.1
...
- 24 Sep 2008 3:34 PM
- Replies
- 1
- Views
- 2,374
It would be nice if I could go:
BeanModelFactory<Foo> factory = BeanModelLookup.get().getFactory(Foo.class);
BeanModel<Foo> fooModel = factory.createModel(foo);
Foo foo = fooModel.getBean();
...
- 24 Sep 2008 3:27 PM
- Replies
- 2
- Views
- 1,011
Cool thanks!
- 23 Sep 2008 9:03 PM
- Replies
- 2
- Views
- 1,405
Hey,
If I want to change the empty text of a field dynamically if I go field.setEmptyText(text) it is not immediately applied. If I go:
field.setEmptyText(text);
if (field.getValue() == null)...
- 23 Sep 2008 8:42 PM
- Replies
- 2
- Views
- 1,011
Hey,
I'm using GXT 1.1 and have found LabelField.validate() causes a NullPointerException because it does not have a "raw value". It should probably override getRawValue() to return it's "text"...
- 22 Sep 2008 11:51 PM
- Replies
- 5
- Views
- 5,500
multiField.setSpacing(10) works for me
- 19 May 2008 6:23 AM
- Replies
- 1
- Views
- 1,435
I'm trying to workout how I can create an initially collapsed split border and the programatically uncollapse the split border and position it to a particular position.
Even if the split was...
Results 1 to 23 of 23
|
http://www.sencha.com/forum/search.php?s=c5f813e5198f0d3d69ec790d5bb8c136&searchid=10264867
|
CC-MAIN-2015-11
|
refinedweb
| 711
| 75.91
|
-
-
Other urls found in this thread:
youtube.com
youtube.com
osu.ppy.sh
osu.ppy.sh
osu.ppy.sh
puu.sh
osu.ppy.sh
osu.ppy.sh
mimicp.tumblr.com
youtube.com
theworstlabel.bandcamp.com
youtube.com
So, there's no sources, got it.
[whateverspec'spostwas]
I need to leave here in 8 hours
Ah, you'll miss all the good and important stuff then. rip in peas.
I'll fill you in tomorrow.
stay hydrogenated
It's not like there's going to be another MW5 announcement or something, just completely out of left field like that. They can't just pull a surprise like that out twice. And beyond that it was mostly just fun for mocking piggy's incompetance
Hu go be a dog somewhere else
mainly an excuse for me to go fap but okay
okay
ew thats nasty
and you're a "girl". the world is round.
I want them to though. I want them to pull something so far out of left field again that no one will see coming.
They will show off MW5, that we know. And HBS will show off BT. There's gotta be something big though. There has to be.
Oh also, yfw Piggy is better at live streams than Christ Roberts.
Roberts has been btfo so many times by shitty streams now that he won't even do them anymore.
what
Sibbi, did you see? I got in trouble for shitposting in TESO zone chat.
wow, just accepting being a dog
There's nothing that could be half as big as MW5. The franchise just isn't big enough to have multiple big vidya in development, definitely not while piggy owns the rights at the very fucking least.
Might be a new tabletop thing or something, I think they still haven't come through with their storyline advancement they promised. Which could tie in with that new what the literal fuck mech for MWO we saw, but it's not like MWO can just jump to a new time period THAT far ahead without any of the shit inbetween.
Phone game?
Card game?
MWO getting the graphical overhaul it finally needs by updating or outright upgrading the engine?
SOLARIS?
can't just jump the timeline again
WE JIHAD NOW
I did not! please share!
I gueeeeeeeeeess? Does any of that really matter though?
Really though, the engine isn't evne close to the problem with MWO visually it's purely in the assets.
Yeah but I mean you couldn't jump to whatever new storyline the tabletop shit takes.
Greetings,
This account has been reported for a Terms of Service violation and is receiving an official warning – Inappropriate Chat:
ZONE Eir Taltoren I'm high, who's fucking me?
ZONE Eir Taltoren 12/f/cali
ZONE Eir Taltoren I'm not a child !
ZONE Eir Taltoren I'm 12 already !
Please take a few moments to review the Terms of Service and Code of Conduct which must be acknowledged to participate in The Elder Scrolls Online: Tamriel Unlimited service. Further violations of any part of our Terms of Service may result in a temporary suspension or even a permanent interruption of your account.
The first message was distinct and unrelated to the last three.
w o w
was it just a warning or?
Well... no. But it sounds like desperate shit Russ would try if nothing else.
I suppose, but some of it is engine limitation, the rest is asset limitation insofar as company assets like... idk, actual texture artists.
This is true, and they'd never do something so stupid. We have yet to even have our first FedCom Civil War event after all.
tbh, I'm really interested to see this Post-Jihad continuation Catalyst had lined up. There's nothing at this point they can do to attract new people, so I wonder just how hard they can throw what's left of the franchise against the wall.
Just a warning.
Someone said that children shouldn't be playing after that 12/f/cali comment so, like.
It was obligatory, right?
Kind of sour they didn't give time stamps on the messages.
There's nothing at this point they can do to attract new people
Make a good fucking game for once.
if you get banned from teso I'm gonna remove my sides.
Poor innocent rin running rampant in teso, mods put her down
...yikes.
So would I, but I'd feel terrible because I wasted the 14$ if I got banned not a week after buying it.
What level are you so far?
25 on the Sorcerer.
We haven't even left Grenumbra left.
I've said it before and I'll say it again, the franchise has wide appeal. It's just been a complete clusterfuck through almost all of it's existence. MWO is bad. The classic tabletop is a clusterfuck you'll never get many people into. The RPG is bad. Mechcommander was okay. MW2-4 was the only part of the franchise genuinely good, and that's why they're by FAR the most well known part. Hopefully HBS Battletech turns out to have the potential I know is in there, and maybe we'll get some interest out of that. Though it'll likely be slow, just like it was with Shadowrun. Game oversaturation is a hell of a thing.
:v if it's not too expensive I might try and get it for George and I
tho I'm not even sure he likes mmos
people who dont like mmos are like pee people except in a bad way and not a good way
agreed :^)
George is a pissboy
ew dont use pee words when talking about pee
MMOs are bad
why play a one game a lot when you can play a lot of games a bit instead
you lost me
dont say p*ss
piss
Nice picture
dont say pee words at me !!!!!!!!!! AAAAAAAAAAAAAAAAAAAAA
thanks
Tera was about the only MMO I really had a blast with.
i just couldn't get invested
because everyone ran off and did their own thing.
There were times you would join our party and go afk when me and Yan fought BAMs just to sponge EXP though.
huh? i don't remember that.
I had more fun playing on my own or with Reid honestly.
And it was against the Kumas in the highlands.
I remember because whenever you went AFK on one of the rocks me and Yan would just leave party and go fight other shit.
at least you're having fun.
i couldn't fight those things on my own you know... i needed help to even be able to do anything near your guys level.....
it's not like i could magically be as strong as you guys right there and then.
u ever just sit in ur chair thinkin about boys i do
You were the same level as me and Yan.
It was only a level 20 area even. And you played a healer. Just throw heals.
oh well, what's past has passed. only thing we can do now is do better in the future.
TERA was fun for what it was
I got to end game and had a blast with a few people I ran into now and then.
Though I bailed after it just became about new classes for the loli or gender locked classes.
STRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRONG!!!!!!!!!!!!
weirdos.
ur weird
Prey was pretty good.
gimme succ
go back to talking about green beans this pee meme is bad
Libel!
Go kick some rocks, pal.
:D
Byad dawg!
sory dad i just think the word pee is fun to say and it sounds funyyy
Obligatory.
Okay, that made me laugh.
♥
so desu ne...
Let's not fight. We're ten times as annoying when we work together!
>pet
That is true, let us cream everyone
where is the food to eat
thread revive !!!!!!!!!!!!!!!
I am so sorry to inform you, but the patient has passed away..
yummmMMMMMM
:( sad to hear...
hi honey i'm home
wewwcome home would u wike some tea
i want you to make me a cup of steaming hot tea and then hold your hand underneath it as you pour it down the drain.
pls.
w-why would you want that... that sounds. painful and wasteful ;;
it's what i want
make emily post.
There's nothing we could do..
But they'll live on in our hearts!
cream everyone
but why :(
im trying to understand..
<3
no
<3
creamed beans
everyone's crop?
yummmmmm!
howdy howdy
I'm back with a vengeance
yeah
gross
work today was awful
i don't want to be an adult anymore
i want to be a housewife artiste
if you are going to post green midget post at least post incase instead of a cheap imitation
midget porn
Hurtful to little people, that would be.
goblins*
me too
I will post whatever I mouse over. This folder is too big to look for incase alone.
it is impossible for you. you are not artistetic like me. my doctor said so.
Im going to punch sabrina in the cunt
yeah i am not good at art at all i have no artistic ability and it makes me sad
That being said, maybe I should start categorizing this shit
i didn't have artistic ability until i studied and practiced it a lot
More sonic porn, pls
ive always felt that practiced and trained art is nothing compared to raw talent and so i just dont even want to try at all
I didn't hear you say pl-
oh okay
this is a misconception invented by people who want to justify their laziness.
well also if i had any real passion for art then i'd want to put forth the effort to practice
also not really a misconception i dont think
you can usually tell when an artistic creation comes from a natural place or from practice
neither is better than the other necessarily being as art is subjective but it does depend i think
as an artist, this is wrong
I thought bean wasn't an artist though
raw talent + practice > practice and no raw talent
im not i dont draw or paint or do anything visually
ive made some vids but nothing serious
i can make OKAY music though
talent
What would loco know
they are not an artist
Loco
Why does Scoots obviously want to hatefuck you this bad?
i'm a cutie boy who likes to be dominated
he finds that attractive, i suppose
Do you find him attractive?
guys the new chapter of onepunch man is looking really sick
i think that's irrelevant. i'm not looking for anything with anyone over the internet.
the loco scootsy lili love triangle
You watch chapters of manga on Youtube?
Word, I feel you.
no thats just where i found out about it and i was so fucking hype i skipped to the first part that looked good and screen capped it
you'll never get scolded and looked at like maika does
why even live
who what ?
I think he's talking about Blend S.
Have you or a loved one been diagnosed with mesothelioma ?
oh i forgot her name somehow
no havve you
You think I should just put "looking for cute girl to peg him" in my Tinder description
y*s
he is a cuck
The only name that matters is Hinata.
mafuyu top tier
sowwy shes got the cutest outfits and hair
yes
who ?
Is tea just gay coffee ?
Y E S
Is coffee just straight tea?
r u just a straight (cu)tea
No its not
What is it then??
Gay coffee
is anything rREALLY straight anymore though
straight outta compton
dicks are pretty straight when they're hard
Coffee is gay coffee?
Thats what I said
Well, I mean you're not wrong.
I'm never wrong.
"I was wrong" - Ban 2017
I'm literally never wrong kiddo
How does that quote exist if you're never wrong then, HMMMN?
Its a lie
Whenever I'm wrong I just tell the person I was arguing with that I was just being sarcastic.
"I was just joking" is good too
I was just joking about saying I was being sarcastic.
Exactly
Nice
You're nice!
Is this sarcasm or joking :c
CUUUUTEoutfit super cute.
It was just a prak dude
Sarcastic joking!
So basically being honest :3
It's just a park dude
don't copy me
Hey, I'm not sure if you noticed, but I said
It's
whereas you said
It was
As well, you said
prak
but I on the other hand said
park
Like I said
Clearly copying me
Hmmmmmmmmz
I wanna read some sad romance manga, please someone recommend me something
highschool dxd hee hee
Emma you lied.
tag urself
Take it to court and you'll be disproven!
What'cha thinking about?
Mmn, it's really short but if you haven't read Koi Kyokusei I definitely recommend that!
Other than that, maybe Orange?
Feeling okay?
Ehh? About what?
aa sowwy ;;
i dont read sad romance manga i just read fun stuff
NOT INCLUDING HIGHSCHOOL DXD
i want to make it very clear i do not ready hsdxd
Nobore!
"It's okay, about 66% of the way through it turns into a 3* map."
More like, 4* through and 7* ending.
Why wouldn't you read DXD?
Koneko is adorable and there's an otokonoko.
too lewd
I'll give Koi Kyokusei a try! Orange would have been a good suggestion but I've already read it
Mhmm, just in the mood for it for some reason. How was your day?
I bet you only read ecchi sketchy manga
what is ecchi sketchy
initial d
SUSUME! TAKAIIIII TOOOOOOOOU
What the heck? Are you sure you're not just bad?
Nothing is too lewd for me.
It's not a genre that I'm super into or anything so I'm sorry I can't be a better source of recommendations...
What the heck? I'm not sure if I've ever been in that kind of mood.
I can't complain too much, Subtle and I spend way too long trying to unsync Sohm Al Hard, he kept getting 1-shot by the final boss's tank buster though... How about yours?
everything is too lewd for me ;;
I'll never lose at court
I'm the next Shaq
Emma post lewd stuff.
please ignore this post emma
I can't tell, you're too young for that..
*m e m e s*
It's okay.. I'm happy you gave me one ^^
Also I blame listening to many weird indie bands lately
Why are you running that unsynced? For Chloe? Tell him to git gud :3
My day was okayy, work and then I went out for dinner and a balet after with a friend. They were supposed to go without someone else but that person was sick so they asked me
Oh no, I'm pretty sure I'm bad.
But I'm also pretty sure that stream drop is objectively agonizing.
Well, I guess I have to if you tell me to.
Even existing?
It's too bad we were talking about tennis!
Hopefully you end up enjoying it at least, I would feel sort of bad if you didn't.
Ehh? Weird indie bands? Post some~
B-Because I thought it would be faster than waiting for a queue!
Even healer queues take forever on old content like that...
I skipped most of that content but I still had all the blue unlock quests for them and I was just trying to clear them all out.
Oh? That sounds like a lot of fun, did you enjoy yourself?
After the two spinners it literally drops down to 3* until like 20 seconds before the end.
And even then it's just 140bpm streams. A snail could do those.
SHIT I FORGOT TO LEWD POST
Lewd post
WHAT IS.
ECCHI. SKETCHY !! ???
existing is p lewd
I don't like streams k.
Why not stop existing then? ^w^
ONE HUNDRED AND FORTY BEE PEE EMM
Even I can handle to like, 160~
1v1 dead to me ar10
I'm the best at that though
dead to me
Brb going to sightread FC it
216 matches found
Oh, this one
I'm pretty good at table tennis but I don't think I've played actual tennis~
Good luck~
I'll let you know~ also here's 2
youtube.com
youtube.com
Sounds boringg. I really don't like most dungeons, especially the old ones :x
Mhm, it was postmodern dance, so really abstract and a little weird at times but I liked it
vibrating pillow.
photoshopped pantsu
It's..........................
I can play everything
i'm the best at everything
I am everything
i dunno
ITS......................... ?
You're too curious for your own good~
LOL
Did you even make it past the first 10 seconds?
Teen Suicide
What a nice band name...
I definitely prefer the second song to the first but you were right about calling them weird at least~
Well, my only other option is farming Lakshmi non-stop until I get a doggo which isn't much better :3
Indie bands and postmodern dance, you're way too advanced.
I'm supposed to be lewd posting!
Is this not good enough?
You vs Goku, who would win?
Ecchi Sketchy One Touchy.
thats me always
if i was a cat i'd be out of lives !!!!!!
WHAT
give him bad food
When did you suddenly turn into a bitch?
Didn't FC it then I take it?
Not hard to make it until the first drop.
I have like 100 something combo.
I'll bow to you if you so much as pass it.
Did that happen ?
You won't find any rest here.
this is a good picture
Yeah.. that's a pretty weird name. I used to listen to electronic music only but this is okay I guess
Oh I loved healing that when nobody remembers to vril like ever :3
Yea I'm slowly turning in one of those art student types. Even my hair got that messy look lately.. It's just a phase, I promise!
Maybe you're already dead and you just don't realise yet...
I failed at the second drop 3/4th of the way through it.
There were two streams back to back right before it and I fucked them up really bad and I couldn't recover~
I was at something like 450 combo so I guess I win :^)
Maybe you were always a bitch.
Why wouldn't you 1v1 him fair and square?
He just got outplayed
nothing more
thats entirely possible ! wherever i'm at right now its not too bad. whether or not im alive.
cant wait to outplay mousey
I can be full laydown and shitpost at the same time...
It is indeed.
Oh well, grats, you're the best.
It's a pretty fun map.
That's a good attitude /pet
But shitposting isn't healthy.
I really should expand my music tastes more, I used to be heavy into rock and metal but then I sort of did a 180 and now I listen to pretty much just anison and denpa.
One of my runs of it the healer was getting really angry at people not using vril so they made a macro to ping when to vril~
Wow, don't let them corrupt you too badly!
M-My hair always has that messy look because hats and wind are a thing :3
b4ll3r
I mean if you just want mindless jumps play KN33SoXXX or Soukai Rock.
how are you feelin this eveNNINg ? morning ? whatever time it is in slovakliaialaland
yeah it is thank u for reaffirming this
what are you up to thise time of these day
still searchin for that sad romanc e ?
Make me healthy.
I'm waiting for it to happen
I told you so
That kinda suprises me, you really don't seem like the type to be into rock and heavy metal. Sounds like you went full weeb though :3
I just let them die.. people won't learn without consequences!
I'll try..! Messy hair is cute though, especially if it's messy because someone ruffled it ^^
nah, i didn't say that. I'm adding to it.
I'm being a lazy and stressed out student most of the time... I wonder if there is a connection between those 2..
And nyo, I have something to read now
Chicken soup and shuteye.
Nuu I want rhythmic maps.
Good song that isn't hellishly out of sync, regardless of how difficult, I don't like routine because it's jumpy but because it's well done.
Even that casual S3rl map we played forever ago is pretty good, even if the song itself is subpar, because mapping is on point.
Idk
I like Blue Dragon I guess.
Eh,
1:22 am so, night or morning, pick one.
It's pretty well, I was hoping to play eso with certain someone but they aren't feeling too well so I'm just working on my osu resolution. only a couple of weeks left and I still can't get over 95% acc on flower dance or close call awiofhbnqaowgb fuck
That's a new one.
how would i go about it
KN33SoXXX IS A BANGEEEERRRR
there might be a connection ? but maybe theres not. maybe the lazy helps make the stress less or sometimes more idk i dont ENTIRELY know u to be able to give this kind of INSIGHT
and o watcha reeeadin ?
i say if youre up till 12am+ its night until sunrise
but if you wake up at like 4 earliest thats morning
whats your osu! account link ? postpostpost
I'm pretty sure there is a connection... but it's okay because I'm still passing stuff ^^
I started reading the manga Wishy recommended me
Its not doable
osu.ppy.sh
It's pretty trash, but I mostly play multi.
I just need a day or two of rest, tbh.
Too much shoveling has me feeling like noodle.
tl: healthy means pregnant.
WHAT IS THE TITLE !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
its literally the easiest thing ever
atleast ur not a washed up osu! who used to be 60k x.x
osu.ppy.sh
i like
can play but i cant ever move up the ladder at all for some reason
idk why its annoying
im gonna sift thru ur hi scores ^^
almost all of my highs are from 2 years ago
They're playing basketball and you're playing space soccer.
The majority of my friend group around grade 7 was into like, P!atD, Three Days Grace and Bullet for my Valentine, so I sort of listened and grew into it from there.
Even then I was still sort of a "bitch" and had a pretty effeminate personality despite my music taste :3
Then sometime after that I ended up rewatching Cardcaptors and then watched K-On! and Kanon and to go full degen weeb~
That's better than spamming vote kicks I suppose! I bet you have both doggos already, right?
It doesn't help that I like to keep my hair a bit longer so it's easy to get messed up. I've improved it a lot from being a metal head back in high school though~
You don't hafta narrate their stories.
Mazare Party just got recently ranked~
Right?
ewie
i wanna be kn33soxxx
kyoot little 6 digits
mashiro kn33soxxx?
Should probably get some vitamin water with magnesium if you have sore muscles.
who what
4d chess
I can't understand your weird language
Right now?
I could probably get higher but I have different goals atm.
Idk, subby is 60k and I can beat him on almost anything, my consistency is just utter trash because mouse and 0 stamina and I also suck at long streams.
You should play more.
Suppose I could try that.
RANK IS JUST A NUMBER I CAN JUMP 6* REEEEEEE
It means like, pure white.
More like 4 d's in your butt xd
That costs money and involves walking 5 miles to the freaking store through 5 feet of snow up hill in both directions through driving hail.
Miss me with that.
Get someone to go on a journey for you.
I'M 6 DIGIT BUT I CAN ALMOST PASS THIS 7* MAP WITH 70% ACCURACY GUYS, I'M BASICALLY COOKIEZI
subby is 60k and I can beat him on almost anything
You wish lol
And you say that you don't lurk all the time.
ur weird
i used to play way too much
i do better with tablet but after about an hour my wrist hurts and my palm gets nasty
but mouse is like half a star less that i can do
i play osu!mania a bit more these days
oh okaaay.
i believe u can do it
I just licked a switch cart for the memetic value and immediately regretted my life choices.
Why not?
The quest to find the holy Vitamin Water.
H E L L O
E V E R Y O N E
Not all the time, just when I'm awake~
OKAY I'VE BEEN WANTING TO LICK THE SWITCH CART EVER SINCE I BOUGHT MAWIO SHOULD I DO IT SHOULD I
Wish told me anyways
1v1 me right now nig
This is straight up bullying.
I don't think I ever even tried that.
Maybe like, once.
B-but osu..
Koi Kyokusei
You could have scrolled up you lazybutt..
I only know Bullet for my Valentine of those three. One of my friends was into that kind of music but I never really liked it too much.
That's pretty cute~ I was in love with a slightly alternative girl in high school but I never confessed because I didn't want my friends to bully me ;;
Nyo I only use the savage mount to show how good I am at the game [spoiler]just kidding,
bubble bird is my first and only primal mount [/spoiler]
Now I wonder what you looked like in high school :3
PoE
Exactly!
For some dumb reason I thought about the holy hand grenade of Antioch.
So.. all the time?
Stupid spoilers.. what did I do wrong
Yeah that's what I thought.
It's a really stinging bitter taste.
It doesn't really last long though, so, like.
Up to you?
Wanna go on a quest to get me the holy Vitamin Water?
The Holy Hand Grenade of Akatosh
Double asterisks here.
:^)
Thats pretty gay
This coming from you.
Swing and a miss
osu!mania is just stepmania with maps made by osu! community
yeah but mmmmmmm you just said it so we're good
if i do it i'm recording it and posting as a webm hee hee
I had to look that up, I thought I butchered the original name.
Grrrr
I won't say it next time hmph
At least I can say I tried. Better than nothing, some might say.
If you do that stupid tee hee bitter gross overreaction crap everyone does I'm not going to talk to you for the rest of the year.
I was legitimately let down by its bitterness because of how much people played it up.
No, I just out-memed you.
I'm a werewoof in TESO now.
Just because SD is addicted to them doesn't mean they actually help.
I know.
I never played stepmania either though.
Have this fancy guide I saved back in "ye olde" days of here.
Legit just link any map you have a better score on and I'll beat it next time I play
Like I do every time
Actually, they used to be my primary fluid intake.
Because I don't really eat animal products I tend to be low on that vitamin group.
Freaking pervo.
I mean, am I wrong? :^)
Oh wow, usually people recognize Panic! at the Disco before them. As long as it has a catchy beat I usually end up enjoying it though.
What the heck? Completely untrue. Your friends would bully you for that? My friends were always so encouraging of relationships~
Pfft, how much Gil do I have to pay you for your static to carry me through it~? White lanner is a pretty good mount though to be fair. One day I'll farm enough to get the firebirb.
Mmn, maybe one day you'll find out. Even then there are only two people here who know my IRL name so my looks are a little bit far away from that :3
I'm pretty gay.
r u claiming im weird
dont do that >.> looks like a nice manga and also kinda short ?
$$im going to vomit live on camera now just because you said this brb$$
Oh, I don't know too much about Elder Scrolls, kind of started playing Skyrim a while ago and I ran out of motivation with it due to whatever things that came along.
Bah, they do magics.
I mean, the other good thing about the magnesium one is that it prevents hangovers.
As long as you drink it before passing out.
PLEASE DO NOT LEWD THE TENKA
wow lolicon
Oh, apparently they removed some and added some?
Well start eating animal products.
YES.
It's meh.
Are you one of those that must absolutely have a hangover after drinking?
Oh, right.
It's not almost passing, it's no where near.
Milk makes me gassy and meat is a pain to cook into a meal.
zoop 👉😎👉
Actually wait.
Fine.
I'll go get a glass of chocolate milk right this second.
for reference if you're not sup
I'll FC it tomorrow so you don't have it too easy though~
yes
How many dicks have you sucked ?
if its under 3 thats not even gay
choccy milkies
Hey man, shit on my streaming, my consistency, my endurance and my speed all you want, but I can take 6* jumps.
boop zoop.
Well, zero.
Want to make it 1?
Don't lewd the Tenka, but lewd everything else
Thankies, saved!
The name rings a bell but that's all, I wouldn't be able to tell it's rock or something ^^''
Mm, maybe they wouldn't have bullied me but I didn't want to take the risk I guess. Soo dumb though
puu.sh
White birb is best birb~
Two is not many ^^
It's always fun to see what people looked like when they were younger, especially if they went trough silly phases
Mhm, short is okay sometimes though
Especially because I should sleep soonish
What about eggs ?
See
Checked the settings for the new codes.
oh for sure yeah ! i like shorter manga more often bc it means i have less to buy x.x
Yeah type
*sucks dick*
that will really get me going
Eggs are good but they're a brekbrek food so it's weird having them not at brekbrek.
Okay brb gonna get that miruku now.
I don't really get hangovers much tbh.
brekbrek better than never at all.
I have like 12 manga but I only bought 2, the rest are all presentos
Then you're not drinking enough.
So I take it you already have an FC on this then?
osu.ppy.sh
Pretty much this to be honest :3
I think I'd classify it as emo rock~
Baka Angel, you should be able to date whoever you want to without being bullied for it.
W-Wow, those are some pretty steep prices...
Why is susex mount so much more expensive than everything else~?
Flying doggo is better than birb though sorry :3
Well, Subtle is an asshole and managed to dox me once I told him I was a boy, then I told Dardar because I felt bad for always being such a pain in his side.
I was looking through old camera images on my Dropbox yesterday and I was dying at some of the haircuts I had that I thought looked good ;;
*ducks sicks*
What the heck?
WOW
THAT WAS AMAZING !
1v1 me straight rum
Oh, I still haven't even played it.
Kinda slipped my mind, brb.
I tend to skip brekbrek.
Don't tell Erin though, she'll yell at me.
Not that Erin.
Nani sore?
im the opposite x.x
i buy manga for buddies and then also for myself and i've been gifted like 3 mangies
who doesnt skip brekbrek tbh
I'd rather go with vodka.
Zipped.
Hey, thanks dude!
We can multi it if you figure out my main :^)
Sore nani?
All the same to me!
I did afterwards. Altough it took confidence a couple times ^^
Yuh it's mostly rich arabs buying on EU. I think I made like 120m~ in total from sales :3
I think the Susano price is a typo, 650m is a little much..
Ahh~ I think the only people that know my name here stopped posting
Did you dye your hair or did you just get bad haircuts?
They make good presents, too bad my friends dont like manga or animu ;w;
Yeah no problem dude
Yknow I just thought about how you said "swing and a miss"
Why you gotta through salt in a wound, honestly
That is probably the second laziest mapping I have ever seen.
Subtle is an asshole
A quite nice opinion!
What's a Tenka?
who doesnt like mango/anims ?? (lamers thats who)
Good morning.
NYANPASSSUUUUU !!!!!! hewwo how are u
And it's all RMT money...
I'm incredibly jealous. Come server transfer to Faerie and decorate my house with all the money~
Lakshmi has the cuter mount anyways.
Would you tell me your name? :3
I got highlights once but it was usually just really bad cuts.
I used to think bangs were a sin so I would have really long hair and have it parted right down the middle ;;
Dude
What's the laziest? Square jump training? :^)
Maji yabakune
That image has pretty nice... opinions...
Adachi Tenka
You're a perv.
Kokoro ga itai
I feel pretty tired and numb today, hbu?
That was a good anime.
I don't!
Pretty terrible image tbh, deleted it.
Bro
Hello
kinda painful ? but im keeping myself distracted so its not too bad. mostly head/neck pain.
hope u feel less tired n numb soon ;;
my point still STANDS !!!!!!!!!!
12番足立甜花、よろしくお願いします!
Atama mawaru
Guru guru
Wow, so perverted.
The new episode was bad tbh.
Wow, so rude...
Bruh...
Echo just play this map, it's literally free pp
That doesn't answer my question though
What's up dude
::scared::
Hidoi
You're calling me a lamer?
Here's a loli to make up for it
Normies...
Yuh probably, I don't really mind though. Gil is gil ^^
Ahh tempting, I love decorating. It's so much fun to make a house cozy even though the options are kinda limited in ffxiv
I wouldn't mind, but not now because I'm going to bed :3
That sounds pretty bad, I feel sorry for past you
Hey there, Emily.
How are you?
You too.
Surpriiiise.
Not much, Hows the Americas?
This is gonna kill me isn't it?
sleep well!
sleepy, why's tired and numb?
Sleep well Angel.
Reasons and stuff.
Do much this weekend?
Hell if I know. I don't like shit, I don't go outside. An Album by Earl Sweatshirt
I'm just sitting here playing Hollow Knight
I AINT CALLIN U A COOLER
nOOOOOOOOOORMIIIIIIIIIIIIESSSSSSSSS
weird friends... so judgemental from what i can tell from your posts x.x
also nini have a nice sleepy rest <3
what is your plans for the day ?
I'm not scary! I promise!
You're hidoi!
Cute!
Lewd!
Mega lewd!
You guys couldn't get in trouble for it at all, could you?
Mhmm, even glamour-wise I wish there were just a lot more customization options available...
That and good anti-aliasing for once :^)
Wow, how embarrassing. Sweet dreams :3
It was the worst ;;
Hm?
Wow, what a map.
No, it's actually free pp.
just some programming hbu
But you watch lewd shows and stuff.
talk c++ with me
Ikr
Best design desu
I'd look for you but you'd probably get mad and ignore me or call me a creep like you did Subtle if I found you.
like you did Subtle
lmao
How come?
Besides the dox :3
std::any vectors are nice
Is that any good?
Fix some computer stuff, organise some things around my room then probably go out at night.
I have to revert my registry back to default so that I can update windows and it'll actually install correctly then set it back up without breaking anything. Since I'm using an early build of 10, I figured it's time for an update. Then after that I need to run that bloat killer script and anti spy etc.
You don't?
I mean, you wouldn't be able to find me.
It was kind of bullshit the way he did.
Because you exist!
std::variant is way cooler though.
WHAT?
Is gud mane. Browsing for some fresh stuff rite now.
There were some :ok_hand: top quality lewds on my pixiv dashboard just now.
i just never update :3
sounds like hard work
i couldn't sit through an episode of that.
is std::variant just a speedy boi version of std::any?
Yeah it's super good.
Made by your people at that.
Honestly better than Cuphead and it's in the same wheelhouse as far as hand drawn art direction but it didn't get nearly as much buzz/recognition.
I probably could.
Thanks to him, actually, I may have already, but I'd rather not get labeled a creep on top of shitty 6 rank and whatever else I am already.
and whatever else I am already
A great person?
I never do too but after a while I probably should.
I don't even play too many games these days.
SD pls.
go out and do ?
(ur a big stinky lamer)
dont post lewd
uwuguu
Well, it's stack allocated and supports compile-time checked visitation.
So yeah, but it only supports a user-specified typeset.
Unlucky.
I really gotta get into that.
How's Giggles?
Yes?
Is that a lack of funds or interest or what
You got me into it in the first place.
That's not a label I've been given before.
security is for chumps anyway
Heck you're smart. I'm looking to do something like
arg.type() == typeid(int) now with arg being an std::any but can variant do this?
Oh, well, now you have been.
What'cha gonna do about it?
assert(std::holds_alternative<int>(arg));
ewwww
Disregard it.
Nope, I will cherish this moment for an eternity.
Tomatsuki's gonna get jealous.
I won't post the really lewd ones~
But what did he do to give it away?
You would do
std::holds_alternative<int>(arg)
Yeah, pretty unlucky.
Hey Sd, I'm alright. Kinda tired. ty btw for those game links.
The latter.
Who you callin' a chump, chump?
For cherishing a moment of friendship?
Oh?
Long days?
No problem, I'll try to link them as long as I find out about such things.
thats not too hard to make happen
Funny, I just got that one too
Good to know we have good taste in lewd
I'm off to bed. Good night !
actually is same
thank you!
Sweet dreams L00dz.
What DO you be DOING then
Nothing intentional, he just sucks at being subtle
that is a maybe though, I am not entirely sure it's actually you
And I actually meant you got me into Osu, not stalking you, that was just to see how hard it could be, I'm stopping here.
She gets jealous over everything. See below.
Exactly, you should tell SD.
Wow.
Night night.
Is same?
There's a .type() ?
Who is this artist and does he draw Tristana? I cannot be bothered to fix reverse search.
I mean, you could literally just check my osu profile for that play I posted.
HEY SD its easy to make tomatsuki jealous
The artist's name is in the corner of the image dumb dumb
mimicp.tumblr.com
Oh, I'm dumb, I was just narrowing it down to global rank and country.
Is a bunch of nonsense.
Yes, good.
I find it funny, because that's what Desu was like to Subtle.
This has been noted.
Accidentally found Nymph too while I was at it.
hey sd do u think youd want to be a pony boy ?
Was Desu getting jealous over Subtle ?
I kinda missed that whole phase.
I didn't know the artist's name either, I just have the image.
How come I can read the name but you couldn't?
A what?
Oh no, I mean
if (std::holds_alternative<int>(arg))
works just as well as
if (arg.type() == typeid(int))
Thanks again :3
Off to bed now, night night.
a pony boy
Ehh? What?
Well, yeah they probably do the same std::visit on decay_t internally anyways.
Night night.
I did not open it to check the name because sometimes they use their DUMBASS signatures that are illegible.
Eh, nothing, never mind.
Because Rin was playing around with Subtle and Desu got absolutely booty-bothered about it.
Sleep well iktje.
Ah, for some reason all I could think of was Donkeyboy.
youtube.com
def not as good as ponyboy IM O
Oh yeah, she told me they were doing it on purpose to piss him off.
You were literally just browsing Canada country ranks? :^)
I mean, that's fair, but you still shoulda checked what if I was too retarded to read?
Then we would have ourselves a real pickle rick
No.
I would be sitting here until next week if I did that.
Yeah.. Donkeyboy's seen better days when he was more relevant, at least in the Nordic countries anyhow.
It was pretty funny.
Kind of happy he's gone.
A what now?
So you going to message me? :^)
whats life like in nordic country
Really?
You've never heard of pickle rick?
Nu.
I'd rather be wrong.
You know, I hear that a lot..
I hate that show.
Because of the fanbase or the show itself?
Work pays well, but everything is awfully pricey.
I think the biggest upside is that wherever I travel the place will be quite cheap to me.
Otherwise the problem is that during summer can't sleep well because there's too much light outside around the clock and during winter there's about 6 hours of sunlight..
For a reason too!
The show itself.
What even?
do you think youll leave at some point of your life or can you see yourself staying there til ldeath ?
Probably will stay here, I don't like paperwork and moving somewhere else just screams piles of paperwork.
I bet the screaming is what gets you
yeah thats my biggest issue for me
just soooo much work to do
YES
Physical work is never an issue, but once they drop a paper in front of me, I just raise a finger and go all like "What do I fill here?"
I'd rather confirm it's you by completing my resolution than creeping on you.
I've been playing maps way above my level for the past month, I don't want it to be for nothing, if I fail, I'll message you.
C'mmon, how bad could it possibly be..
MORTY [BURPS] WE GOTTA GO ON AN ADVENTURE MORTY! SEASON 3 [BURPS] BIIIIIIIITCH!
I actually enjoy the show, though.
Huh, what would be?
I mean you can still confirm you know without me knowing too :^)
Desu
yeeeeep same here
it just hurts my head
I guess.
Please no.
There's an old saying here that pretty much loosely translates to avoid something like cancer, and I think that would fit the moment.
We should have a ban for all bureaucracy.
You're making me regret missing it.
how do we get this band
It's not my fault you weren't around for the time.
Bandcamp, duh.
good call.
theworstlabel.bandcamp.com
It even hurts to read.
Yeeeeeah...oh well. I'm sure worse posters will pop up eventually.
Sinni?
He has popped around twice after it all.
JEEZ RICK
Stop being a baby
Enough.
whoops idk why my trip disappeared
Lloyd's IQ is just too damn high
Ah, thought it was Sinni because of K.K. Slider.
youtube.com
das not ebin kk slider !!
I'll be the judge of that
Thankfully my rent is fairly affordable
Neru finally watched the whole thing after shitting on it ever since she heard about it.
Judge wisely.
I know.
But the picture was!
What happens if you don't like my judgment?
Throw a tantrum and drink heavily.
I can deal with this. Just don't break the vases. What will you drink
WOFFEE
The act of going to a venue which can be used for flexible working. Derived from wi-fi, work and coffee - woffee.
- Urban Dictionary
Neat
Oh, it had a meaning?
I wrote the entry just now.
You son of a bitch.
it wasnt !!!!!!!!
It's still this thread?
Now you have to go to work during your tantrum :3
Rick and Morty fans are really annoying.
New
New
New
I cannot do it, I am afraid. *cough cough* I am sick.
|
https://hollaforums.com/thread/1935979/anime/-.html
|
CC-MAIN-2019-13
|
refinedweb
| 7,701
| 83.15
|
#include <ident.h> High-level calls IDENT *ident_lookup(int fd, int timeout) char *ident_id(int fd, int timeout) void ident_free(IDENT *id) Low-level calls id_t *id_open(laddr, faddr, timeout) struct in_addr *laddr, *faddr; struct timeval *timeout; int id_close(id) id_t *id; id_query(id, lport, fport, timeout) id_t *id; int lport, fport; struct timeval *timeout; int id_parse(id, timeout, lport, fport, identifier, id_t *id; struct timeval *timeout; int *lport, *fport; char **identifier, **opsys, **charset; int id_fileno(id) id_t *id;
ident_lookup tries to connect to a remote IDENT server to establish the identity of the peer connected on fd, which should be a socket file descriptor. timeout is the longest permissible time to block waiting for an answer, and is given in seconds. A value of 0 (zero) means wait indefinitely (which in the most extreme case will normally be until the underlying network times out). ident_lookup returns a pointer to an IDENT struct, which has the following contents:
typedef struct { int lport; /* Local port */ int fport; /* Far (remote) port */ char *identifier; /* Normally user name */ char *opsys; /* OS */ char *charset; /* Charset (what did you expect?) */ } IDENT;
For a full description of the different fields, refer to RFC-1413.
All data returned by ident_lookup (including the IDENT struct) points to malloc'd data, which can be freed with a call to ident_free. ident_lookup returns 0 on error or timeout. Presently, this should normally be taken to mean that the remote site is not running an IDENT server, but it might naturally be caused by other network related problems as well. Note that all fields of the IDENT struct need not necessarily be set.
ident_id takes the same parameters as ident_lookup but only returns a pointer to a malloc'd area containing the identifier string, which is probably the most wanted data from the IDENT query.
ident_free frees all data areas associated with the IDENT struct pointed to by id, including the struct itself.
The low-level calls can be used when greater flexibility is needed. For example, if non-blocking I/O is needed, or multiple queries to the same host are to be made.
id_open opens a connection to the remote IDENT server referred to by faddr. The timeout is specified by timeout. A null-pointer means wait indefinitely, while a pointer to a zero-valued timeval struct sets non-blocking I/O, in the same way as for select(2). id_open returns a pointer to an id_t datum, which is an opaque structure to be used as future reference to the opened connection. When using non-blocking I/O it might however be useful to access the underlying socket file descriptior, which can be gotten at through the id_fileno macro described below.
id_close closes the connection opened with id_open and frees all data associated with id.
id_query sends off a query to a remote IDENT server. lport and fport are sent to the server to identify the connection for which identification is needed. timeout is given as for id_open. If successful, id_query returns the number of bytes sent to the remote server. If not, -1 is returned and errno is set.
id_parse parses the reply to a query sent off by id_query and returns information to the locations pointed to by lport, fport, identifier, opsys and charset. For string data (identifier, opsys and charset) pointers to malloc'd space are returned.
id_parse returns:
For all errors, errno is set as appropriate.
id_fileno is a macro that takes an id_t handle and returns the actual socket file descriptor used for the connection to the remote server.
Here's an example how to handle the reply from id_reply() in the case that non-blocking I/O is set. Note that id_reply() will return 0 as long as it's not finished parsing a reply.
int rcode; ... idp = id_open(...) ... while ((rcode = id_parse(idp, timeout, &lport, &fport, &id, &op, &cs)) == 0) ; if (rcode < 0) { if (errno == ETIMEDOUT) foo(); /* Lookup timed out */ else bar(); /* Fatal error */ } else if (rcode == 1) { /* Valid USERID protocol reply */ } else if (rcode == 2) { /* Protocol ERROR reply */ }
|
http://www.makelinux.net/man/3/I/ident_id
|
CC-MAIN-2016-44
|
refinedweb
| 676
| 58.82
|
18 March 2011 01:51 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The 10-20 days of turnaround was brought forward from its initial plan in early May after “a thorough consideration” in view of record high feedstock paraxylene (PX) prices, the source said.
Its PTA capacity is expected to be increased to 2m tonnes/year from 1.5m tonne/year after the completion of the turnaround, he added.
Yisheng Dahua Petrochemical is a joint venture between major PTA producer Yisheng Petrochemical and the Dahua Group. The former operates two PTA lines with a total production capacity of 1.1m tonnes/year
|
http://www.icis.com/Articles/2011/03/18/9445006/chinas-yisheng-dahua-to-shut-pta-for-turnaround-debottlenecking.html
|
CC-MAIN-2015-22
|
refinedweb
| 102
| 64.2
|
Agenda
See also: IRC log
<fsasaki>
<fsasaki> scribe: fsasaki
jirka: used source code of the
validator.nu
... currently best validator for html5
... validation engine is used by w3c too
... modified HTML5 to have ITS attributes in the validation preset
... will submit my code to validator.nu, so in the future validation with ITS will also be possible in W3C validator
... only issue: our working draft is not very stable yet
... we can postpone the addition to validator.nu until tpac meeting
... if you want to try yourself, I have posted links for offline-validaton
<declan> I think this is the github link:
jirka: will be quite smooth to
get our schemas part of w3c validator
... not so sure about validator.nu, but support by w3c validator will be sufficient
... another tool: created very simple tool that can take input HTML5 and produces XHTML
... that is, local ITS markup in XML
... useful if you have a toolchain that processes XML content
... not sure if it's very useful, haven't got a lot of feedback from implementors
... related question, also on agenda:
... currently our specification is missing a definition of local markup for XHTML
... for the past we used XML namespaces
... ITS markup was in ITS namespace
... in HTML5 we use its-* attributes
... so it makes more sense to use same prefixed attributes in XHTML
... I posted that issue on the ML
... outcome of discussion needs to be reflected in ITS 2.0 spec, that is: guidance how to use ITS local markup in XHTML
shaun: even with XHTML you would
be able to use its-*
... instead of a namespace attribute
... XML syntax would still be OK
... which one is more useful may depend on the tool
... a tool that doesn't know about XHTML, you would want to transform those things into local markup
... using the HTML like markup might be more friendly to people doing HTML stuff
jirka: true, but we can do both
approaches, but should have some guidance
... e.g. if you use XHTML internally, in a workflow, you can use the ITS namespace
... if it is a public web site, prefixed attributes are better
... in this way you have consistent parsing even with HTML parser
... so both approaches are OK, but we would need some explanation in the spec
... there is "polyglott markup", that is: intersection of XHTML and HTML syntax
... if you use that syntax, you get the same parsing result with both HTML and XHTML parser
... if you want to use a document in this subset, you need to use its-* attributes
... ITS namespaced attributes are not allowed in polyglot
... I don't have a preference, but we should mention this in the spec
shaun: sounds good
pedro: for us it is very important to have an option for XHTML
<scribe> scribe: jirka
<scribe> ACTION: Jirka to draft section about using its-* and its: in XHTML [recorded in]
<trackbot> Created ACTION-220 - Draft section about using its-* and its: in XHTML [on Jirka Kosek - due 2012-09-27].
felix: schema refactoring will wait after prague meeting, so spec is more stable
<fsasaki> scribe: fsasaki
jirka: one problems of HTML5 is
lack of extensibility
... in XML you can easily use namespaces
... in spring we had decide which approach to use in HTML5
... we decided for its-* attributes
... there was some support for this approach in HTML5 spec
... I wasn't 100% sure that if we use such attributes it will be well received by HTML community
... there are not so many specs that add something to HTML5
... yesterday chairs of HTML WG introduced a new plan
... goal is to speed up HTML5 recommendation
... one approach is: split some problematic and other parts into extension specifications
... at the same time they say that the validator will support such extensions
... so validator (= w3c validator) will support also elements and attributes from widely supported extensions
... that is good for us because it is common to have such extensions
... also, mike smith from w3c, said to felix that he has nothing against adding support HTML5+ITS2.0 once we have schemas ready
... I'd just like to postpone this until all data categorie are settled
... so for us it is good
... there was some discussion on HTML mailing list
... saying extensions are normal for HTML5, and having a spec HTML5+ITS is OK thing
... and validator can support that out of the box
<scribe> scribe: jirka
felix: we still have plan to meet
with HTML WG at TPAC
... Felix is in touch with HTML WG chairs on date/time of joint meeting
Pedro: We are trying to move clients to HTML5 but we want to prevent compat. problems
Felix: Now you can use either XHTML or HTML with ITS, depending on what's more approapriate for your client
<fsasaki>
Felix: Dave developed test suite wiki page
<fsasaki>
<fsasaki>
Felix: we need to get
implementors in Prague in order to fill above table
... feedback from W3C about test suite design - it's OK
... only tests shouldn't be so artificial
<fsasaki>
<fsasaki>
Felix: we need at least two outputs for each data category
Declan: Output is not useful itself, it's just sort of metadata?
Felix: Yes, we changed this so it
is easy to compare output from various tools.
... you should do also test of real life usage
<fsasaki>
Felix: above is template for gathering basic information about implementations
see thread
Yves: There is no formal definition of overriding in ITS 1.0/2.0. But there is something in tests for ITS 1.0.
<fsasaki> comment from felix: tests = ITS 1.0 tests
Yves: so intent is here in tests for ITS 1.0, we should be backward compatible with ITS 1.0
Olaf: Overriding non-existing value seems quite complicated. I prefer to override only existing values.
Yves: Yes, this would be more intuitive, but it is not compatible with ITS 1.0.
Olaf: Can we change this? It seems that noone implementd this anyway.
Yves: Overriding can get very complex if we start to use pointers and references as well.
Olaf: There must be way how to fix past mistakes.
Felix: We need to discuss this in
Prague.
... in the scope of affected categories
Tadej: Example - one document
annotated by two tools.
... is this legal, can both tools annotate the same element/attribute?
... we can allow only global annotations
Olaf: We can consider additional
global parameter...
... (sorry I haven't catched it)
<fsasaki>
Declan: agrees with Olaf
<fsasaki> look at the lq-issue in above mail
felix: describes problem - is inherited global value of data category if only part of it was locally overriden
Olaf: If you set set of parameters globally, you should be able to override part of it only
Felix: We can have document level annotations
<fsasaki> Jirka thinks that we need terminology here
Declan: that would work for mtConfidence
<fsasaki> Jirka we don't have other parameters then its:param in ITS
<fsasaki> Jirka What about compound data category
Tadej: that would work for me as well
<scribe> ACTION: felix to Summarize discussion about overriding [recorded in]
<trackbot> Created ACTION-221 - Summarize discussion about overriding [on Felix Sasaki - due 2012-09-27].
Meeting adjourned
|
http://www.w3.org/2012/09/20-mlw-lt-minutes.html
|
CC-MAIN-2015-11
|
refinedweb
| 1,202
| 65.42
|
.
There are many interesting articles, so this one will concentrate only on parts of the Lucene project. In particular, we will concentrate on the indexing of documents, which is the heart of Lucene. Each document has to be indexed in advance to be searched. Let's walk through this process very quickly. Lucene analyzes input and creates Document objects. This is a composition of many Field objects, each composed of name-value pairs. You may think about these as "properties" associated with the document. When you want to find a document, all you need is a set of name-value pairs to be used as search criteria. Lucene will find all documents that meet this criteria.
Let's look at an example. Say we have an address book application. Each entry is represented with the
Friend class, as shown in Figure 3.
Figure 3. Friend class
Here are a few example search queries you might want to run. All are written in query language supported by Lucene QueryParser:
We assume all entries in the address book application are stored in a database accessed with Hibernate. This is a typical example, used in plenty of real-world applications. All we need is some code between the search form and the Hibernate API that can understand the query language and produce Hibernate criteria. Figure 4 shows what we want to achieve.
Figure 4. Solution overview
For each query string entered by the user, we invoke
QueryParser to build an object representation. Next
QueryInterpreter walks through the object tree and uses
HibernateQueryBuilder to create the Hibernate criteria. Hibernate will then execute the query and return the result. Simple, isn't it? We are only missing the amber parts shown on the diagram.
Lucene parses the query string and produces an object representation; this is the input for
QueryInterpreter. Object representation is a composition made from types extending Lucene's Query class. Each reflects part of the query string. The following table shows selected query types. Some represent restrictions; others are logical conditions.
For example, the parser reads "name:John" and creates
TermQuery. Now you know everything that is necessary to start implementing
QueryInterpreter.
First, we create a template method to understand what information the
TermQuery type represents:
import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; class QueryInterpreter { void parse(Query query) { if (query instanceof TermQuery) { Term term = query.getTerm(); String fieldName = term.field(); String fieldText = term.text(); System.out.println("TermQuery ["+fieldName+":"+fieldText+"]"); } else { throw new IllegalArgumentException("Unsupported Query type [" + query.getClass() + "]"); } } }
As you see,
TermQuery consists of the name field and the expected value. In the preceding code, the
QueryInterpreter is able to recognize only simple queries, like "name:John." More complicated restrictions are not supported in the example, but of course it could be extended to support more complex forms. However, that is beyond the scope of this article.
Our next step is to invoke
QueryBuilder. We start with an interface:
import org.apache.lucene.search.TermQuery; public interface IQueryBuilder { void termQuery(TermQuery query); } import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; public class QueryInterpreter { private final IQueryBuilder queryBuilder; public QueryInterpreter(IQueryBuilder queryBuilder) { this.queryBuilder = queryBuilder; } private void interpret(TermQuery query) { this.queryBuilder.termQuery(query); } public void parse(Query query) { if (query instanceof TermQuery) { System.out.println("TermQuery"); interpret((TermQuery) query); } else { throw new IllegalArgumentException("Unsupported Query type [" + query.getClass() + "]"); } } }
For every
TermQuery occurrence,
QueryInterpreter invokes
IQueryBuilder.termQuery method. This is somehow similar to how SAX parser works with XML input events.
Now we can develop the query builder. As mentioned before, this example is only using
TermQuery for the sake of a manageable example--it is very easy to translate it into the Hibernate criteria..
|
http://www.onjava.com/lpt/a/7037
|
CC-MAIN-2015-22
|
refinedweb
| 623
| 51.75
|
Quote:Any idea where I could find the latest version ?
github: xbmc/repo-scripts gotham branch
scott s.
.
Quote:Any idea where I could find the latest version ?
class MyPlayer(xbmc.Player):
def __init__(self, *args):
pokus = xbmc.getInfoLabel('ListItem.DBID ')
xbmc.getInfoLabel('Window(Home).Property(RecentEpisode.1.Title)')
title
VideoPlayer.Title
(2018-04-06, 19:10)WildPhydeaux Wrote: Is there a more recent support thread than this one marked as [Frodo]?
What I'm wondering is if any progress has been made to allow recognition of video nodes with this widget set.
I have set up a custom node to include only documentary films, which KODI sees as movies and I would like this widget to only return results from that node. So effectively showing only new documentaries instead of all new movies.
Cheers and thanks for such a great widget set. I use it with the Xonfluence skin.
Robert
(2018-04-08, 09:18)jurialmunkey Wrote: You should be able to do pretty much everything that you want with a smart playlist.
|
https://forum.kodi.tv/showthread.php?mode=linear&tid=142389&pid=2922250
|
CC-MAIN-2021-21
|
refinedweb
| 174
| 67.45
|
, as many of you have heard, SAMBA 4 was finally released… and holy crap, it’s the closest LDAP service I’ve ever seen to the real Active Directory. As well it should be too, I mean, Microsoft actually helped work on it! This release of SAMBA is huge. It’s really going to change the game of LDAP, file sharing between Linux/Unix and Windows, and authentication. You can read the news release from the SAMBA team HERE or visit their website HERE
What is really huge about it all is that you can setup a SAMBA 4 server to take over, literally, all functions of a Windows AD Domain Controller. It can process authentication requests, hand out Group Policies, process MSRPC communications and more. Think about if you could replace most of your Windows AD DCs with free software. How much will that save you in cost?
So naturally, I’m in the process of getting it up and running. I figure I just got my home systems to authenticate to Active Directory, why not replace one of the Domain Controllers with a SAMBA Domain Controller? So, I’m basing this on a Debian 6 machine. I figure that’s the best place I can put it since I plan on it being around for a while. Why do I plan on it being around for a while? Because rebuilding AD from scratch sucks! And with Debian being a rolling-release operating system, I’ll never have to reinstall the OS on the next release! Pretty damn convenient if you ask me.
So I downloaded the small .iso installer file from Debian
After getting the OS up and running, I had some house keeping to do:
Then I was able to install VMware Tools. You’d have to do the same thing on a VirtualBox system, and you’ll need that stuff for installing SAMBA anyways, so you might as well just install this stuff now and get it over with.
So now we need to actually get the SAMBA4 code, which we can do two separate ways. I’m sure in the future that SAMBA 4 is going to start becoming available in repositories so how you do this is up to you. The two options I recommend are GIT and WGET, which are outlined here:
My Debian 6 machine didn’t have GIT installed so a simple “sudo apt-get install git” solved the issue.
mkdir samba4
cd samba4
git clone git://git.samba.org/samba.git samba-master
-or-
mkdir samba4
cd samba4
From here it’s simple. If you downloaded the tarball, just extract it like this:
Now, whether you have performed either the GIT or the TAR method, you’re in the same place.
Just enter your “samba-master” or “samba-4.0.0” directory to continue.
From here we’re going to compile everything, starting with
then
then
Durring the configure, make and install you’ll see a ton of scrollback. I set my scrollback to “Unlimited” in my terminal so that I can go back through it if there are issues. I forgot that the “make install clean” needs to be run as root or you can sudo that command:
sudo make install clean
[sudo] password for steve:
WAF_MAKE=1 python ./buildtools/bin/waf install
./buildtools/wafsamba/samba_utils.py:397: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
Waf: Entering directory `/home/steve/samba/samba-master/bin'
* creating /usr/local/samba/etc
* creating /usr/local/samba/private
* creating /usr/local/samba/var
* creating /usr/local/samba/private
* creating /usr/local/samba/var/lib
* creating /usr/local/samba/var/locks
* creating /usr/local/samba/var/cache
* creating /usr/local/samba/var/lock
* creating /usr/local/samba/var/run
* creating /usr/local/samba/var/run
Selected embedded Heimdal build
Checking project rules ...
Project rules pass
...
...
(scrollback omitted)
...
...
Waf: Leaving directory `/home/steve/samba/samba-master/bin'
'install' finished successfully (1m42.653s)
WAF_MAKE=1 python ./buildtools/bin/waf clean
./buildtools/wafsamba/samba_utils.py:397: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
Selected embedded Heimdal build
'clean' finished successfully (0.765s)
And we’re done! Well, at least for installing this software.
You now have SAMBA 4 installed on a Debian 6 System! 🙂
total 40K
drwxr-sr-x 10 root staff 4.0K Dec 15 13:18 .
drwxrwsr-x 11 root staff 4.0K Dec 15 13:18 ..
drwxr-sr-x 2 root staff 4.0K Dec 15 13:20 bin
drwxr-sr-x 2 root staff 4.0K Dec 15 13:18 etc
drwxr-sr-x 7 root staff 4.0K Dec 15 13:18 include
drwxr-sr-x 14 root staff 4.0K Dec 15 13:19 lib
drwxr-sr-x 2 root staff 4.0K Dec 15 13:18 private
drwxr-sr-x 2 root staff 4.0K Dec 15 13:20 sbin
drwxr-sr-x 7 root staff 4.0K Dec 15 13:20 share
drwxr-sr-x 7 root staff 4.0K Dec 15 13:18 var
You probably want to make SAMBA start when your server boots, right? Well, lets get that going.
The people over at SAMBA have made this super easy. So lets get some wget action going on this.
Just use wget like this:
If that link doesnt work, I’ve posted the script on my site, here:
From here, you just need to make sure this script is executable:
Now you can add this to your init scripts:
As for configuring SAMBA 4, that’ll be in my next blog. If it’s anything like setting up and configuring a Microsoft AD Domain Controller (which I’m sure it’ll be much MORE difficult than that) then I’m sure the next blog will be pretty long…
Later!
Sources:
|
http://www.erdmanor.com/blog/tag/git/
|
CC-MAIN-2018-51
|
refinedweb
| 971
| 64.81
|
👈 Touch¶
Your Tingbot comes equipped with a resistive touch screen! It’s easy to react to touch events.
import tingbot from tingbot import * screen.fill(color='black') @touch() def on_touch(xy): screen.rectangle(xy=xy, size=(5,5), color='blue') tingbot.run()
This is a simple drawing app. It uses the
@touch() decorator to receive touch events and draws a
rectangle to the screen at the same place.
@
touch(xy=…, size=…, align=…)
This ‘decorator’ marks the function after it to receive touch events.
You can optionally pass an area that you’re interested in, using the
xy,
sizeand
alignarguments. If you specify no area, you will receive all touch events.
The handler function can optionally take the arguments
xyand
action.
xyis the location of the touch.
actionis one of ‘down’, ‘move’, ‘up’.
|
http://tingbot-python.readthedocs.io/en/latest/touch.html
|
CC-MAIN-2017-13
|
refinedweb
| 132
| 70.7
|
Python Programming, news on the Voidspace Python Projects and all things techie.
Python 2.6
Python 2.5 isn't even out of the door yet and Python 2.6 features are already being discussed.
PEP 359 discusses a new language feature planned [1] for Python 2.6.
This would add a new piece of language syntax, the make statement. It allows you to create custom (named) namespaces without all the machinery overhead of the class statement.
I often use classes to create namespaces for related attributes, so I might use this [2].
There is a danger in adding new syntax to the language. Python 2.4 and 2.5 have both done this, with decorators, generator expressions, the with statement and conditional expressions [3]. Python 2.6 already proposes to do this by changing the import semantics in the form of relative imports.
This creates two difficulties.
- When using new syntax features it is basically impossible to write code that will still work on previous versions. Even if you check the Python version at runtime, to ensure that the code is only executed on an interpreter that can handle it, the parser will still choke.
- It raises the bar for people trying to learn the language. When looking at code they have to be able to recognise more language constructs to have a chance of understanding what is going on.
Python doesn't suffer from being defined by a standards body, choking all possible improvement, but it seems to be growing rapidly. I believe that Python is in good hands, and some of the recent additions add great power in the right hands. Having said that, the reason I fell in love with Python was the emphasis it placed on readability and clarity. I hope those taking Python into new and uncharted territory remember axiom number three from the Zen of Python :
Simple is better than complex.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-04-13 17:03:10 | |
Categories: Python, General Programming
Snowcrash
I've got a few days off whilst I'm in between jobs. We move on Friday, so the next couple of days will be very hectic. In the meantime I'm doing a hasty update to one of my programs.
To accompany the headscratching and mind-numbing work, (and having finished Angela's Ashes) I'm now listening to Snow Crash. This is written by Neal Stephenson, one of the heroes of Cyberpunk.
Its theology is distinctly dubious [1], but that aside it is very interesting [2]. I'm particularly drawn to the Metaverse. This is a virtual world created by computer hackers.
It is similar to the cyberspace envisioned by William Gibson, but more of the details are fleshed out. From my point of view, the most exciting thing about it, is the ability to create new things. Stephenson doesn't describe the mechanics of virtual matter and energy, but programmers are able to create new and working objects like motorcycles.
I've searched the web [3] to look for the hackers who are creating the Metaverse. Despite the rise of the MMORPG, where people can interact in virtual worlds, I can find nothing similar. All the ones I've seen have a fixed and rigid infrastructure. Objects must be built into to the fabric of the world, so you're unable to create new ones from the inside.
I'll have to wait, or build the Metaverse myself.
Note
I was explaining to a friend recently how humans had managed to create such complex things as computers.
Computers are (currently) based on silicon wafers, with circuits containing hundreds of thousands of transistors - switches.
Despite the complexity of modern devices, they are built from combinations of circuits comprised of very simple components. At their heart, microprocessors are still descendants of mechanical devices like The Difference Engine of Charles Babbage.
The basic component of the human is the cell. A cell is a phenomenally complex device, a miniature chemical factory that operates on the quantum interaction of its inner workings. Each of us have hundreds of millions of these cells.
So despite packing untold numbers of transistors onto silicon wafers, humans are still a long way off achieving anything remotely approaching the complexity of the human body (and mind).
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-04-12 16:14:36 | |
Categories: General Programming, Fun
rest2web Markup
In working on rest2web I've realised that it would be very easy to support other markups.
The ones that spring to mind are :
Would anyone find that useful ?
I've also discovered that the current system of uservalues is broken in lots of cases [2]. To fix this I think I need to add a new syntax for including uservalues. I will probably go with :
<* uservalue *>
This allows uservalues to contain ReST and still be turned into HTML.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-04-10 09:25:18 | |
Categories: Projects, Python
rest2web Progress
There has been some tangible progress on rest2web at last.
There is an updated version in the SVN Repository.
rest2web now supports three levels of verbosity via command line options :
- -v Verbose (the default)
- -a Actions & Warnings Only
- -w Warnings Only
There is currently a bug in the handling of the file keyword when use in conjunction with the index-file keyword.
Update
The file bug is now fixed in SVN.
After this there are several new features to be added before the next release.
Other rest2web news; the Leipzig Python User Group are having a meeting on April 11th. Mike Müller is giving a talk on rest2web. Shame it's a bit far for me to go, but it should be interesting.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-04-09 15:10:14 | |
Archives
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
Counter...
|
http://www.voidspace.org.uk/python/weblog/arch_d7_2006_04_08.shtml
|
CC-MAIN-2016-26
|
refinedweb
| 1,012
| 65.42
|
CodePlexProject Hosting for Open Source Software
Hi,
Is there a way to specify the type (for completion) of function's arguments ?
For example if I have the following function:
def myFunc(arg1,arg2):
...
Is it possible to have the code completion working for arg1 and arg2 by specifying the type somewhere?
Thanks.
There's no declarative way to do it, though we do have a feature open which you can vote on:
If you make a call to myFunc, say myFunc(1, "hello"), the analysis engine will figure out that arg1 is int and arg2 is str. Similarly if you do:
foo = MyClass()
bar = MyOtherClass()
myFunc(foo, bar)
then the analyzer will know that arg1 is MyClass and arg2 is MyOtherClass. It gets less useful when you make calls with inconsistent types though.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later.
|
http://pytools.codeplex.com/discussions/270130
|
CC-MAIN-2017-39
|
refinedweb
| 169
| 71.85
|
.naming.event; 27 28 /** 29 * Specifies the methods that a listener interested in namespace changes 30 * must implement. 31 * Specifically, the listener is interested in <tt>NamingEvent</tt>s 32 * with event types of <tt>OBJECT_ADDED</TT>, <TT>OBJECT_RENAMED</TT>, or 33 * <TT>OBJECT_REMOVED</TT>. 34 *<p> 35 * Such a listener must: 36 *<ol> 37 *<li>Implement this interface and its methods. 38 *<li>Implement <tt>NamingListener.namingExceptionThrown()</tt> so that 39 * it will be notified of exceptions thrown while attempting to 40 * collect information about the events. 41 *<li>Register with the source using the source's <tt>addNamingListener()</tt> 42 * method. 43 *</ol> 44 * A listener that wants to be notified of <tt>OBJECT_CHANGED</tt> event types 45 * should also implement the <tt>ObjectChangeListener</tt> 46 * interface. 47 * 48 * @author Rosanna Lee 49 * @author Scott Seligman 50 * 51 * @see NamingEvent 52 * @see ObjectChangeListener 53 * @see EventContext 54 * @see EventDirContext 55 * @since 1.3 56 */ 57 public interface NamespaceChangeListener extends NamingListener { 58 59 /** 60 * Called when an object has been added. 61 *<p> 62 * The binding of the newly added object can be obtained using 63 * <tt>evt.getNewBinding()</tt>. 64 * @param evt The nonnull event. 65 * @see NamingEvent#OBJECT_ADDED 66 */ 67 void objectAdded(NamingEvent evt); 68 69 /** 70 * Called when an object has been removed. 71 *<p> 72 * The binding of the newly removed object can be obtained using 73 * <tt>evt.getOldBinding()</tt>. 74 * @param evt The nonnull event. 75 * @see NamingEvent#OBJECT_REMOVED 76 */ 77 void objectRemoved(NamingEvent evt); 78 79 /** 80 * Called when an object has been renamed. 81 *<p> 82 * The binding of the renamed object can be obtained using 83 * <tt>evt.getNewBinding()</tt>. Its old binding (before the rename) 84 * can be obtained using <tt>evt.getOldBinding()</tt>. 85 * One of these may be null if the old/new binding was outside the 86 * scope in which the listener has registered interest. 87 * @param evt The nonnull event. 88 * @see NamingEvent#OBJECT_RENAMED 89 */ 90 void objectRenamed(NamingEvent evt); 91 }
|
http://checkstyle.sourceforge.net/reports/javadoc/openjdk8/xref/openjdk/jdk/src/share/classes/javax/naming/event/NamespaceChangeListener.html
|
CC-MAIN-2017-51
|
refinedweb
| 336
| 56.55
|
I know the headline to this post is a little bit boring: More Rules for Expressions. Honestly, this post is about code hygiene, because I will mainly write about pointers.
Let's have a look at my plan for today.
nullptr
0
NULL
I will start with a very import rule.
Let me cite the words of the guidelines: "Complicated pointer manipulation is a major source of errors.". Why should we care? Of course, our legacy code is full of functionality such as this example:
void f(int* p, int count)
{
if (count < 2) return;
int* q = p + 1; // BAD
int n = *p++; // BAD
if (count < 6) return;
p[4] = 1; // BAD
p[count - 1] = 2; // BAD
use(&p[0], 3); // BAD
}
int myArray[100]; // (1)
f(myArray, 100), // (2)
The main issue with this code is that the caller must provide the correct length of the C-array. If not, we have undefined behaviour.
Think about the last lines (1) and (2) for a few seconds. We start with an array and remove its type information by passing it to the function f. This process is called an array to pointer decay and is the reason for a lot of errors. Maybe we had a bad day and we count the number of elements wrong or the size of C-array changed. Anyway, the result is always the same: undefined behaviour. The same argumentation will also hold for a C-string.
What should we do? We should use the right data type. The Guidelines suggests using gsl::spantype from the Guidelines Support Library (GSL). Have a look here:
void f(span<int> a) // BETTER: use span in the function declaration
{
if (a.length() < 2) return;
int n = a[0]; // OK
span<int> q = a.subspan(1); // OK
if (a.length() < 6) return;
a[4] = 1; // OK
a[count - 1] = 2; // OK
use(a.data(), 3); // OK
}
Fine! gsl::span checks at run-time its boundaries. Additionally, the Guidelines Support Library has a free function at for accessing the elements of an gsl::span.
void f3(array<int, 10> a, int pos)
{
at(a, pos / 2) = 1; // OK
at(a, pos - 1) = 2; // OK
}
I know your issue. Most of you don't use the Guidelines Support Library. No problem. It's quite easy to rewrite the functions f and f3 using the container std::array and the method std::array::at. Here we are:
// spanVersusArray.cpp
#include <algorithm>
#include <array>
void use(int*, int){}
void f(std::array<int, 100>& a){
if (a.size() < 2) return;
int n = a.at(0);
std::array<int, 99> q;
std::copy(a.begin() + 1, a.end(), q.begin()); // (1)
if (a.size() < 6) return;
a.at(4) = 1;
a.at(a.size() - 1) = 2;
use(a.data(), 3);
}
void f3(std::array<int, 10> a, int pos){
a.at(pos / 2) = 1;
a.at(pos - 1) = 2;
}
int main(){
std::array<int, 100> arr{};
f(arr);
std::array<int, 10> arr2{};
f3(arr2, 6);
}
The std::array::at Operator will check at runtime its bounds. If pos >= size(), you will get an std::out_of_range exception. If you look carefully at the spanVersusArray.cpp program, you will notice two issues. First, the expression (1) is more verbose than the gsl::span version and second, the size of the std::array is part of the signature of the function f. This is really bad. I can only use f with the type std::array<int, 100>. In this case, the checks of the array size inside the function are superfluous.
To your rescue, C++ has templates; therefore, it's easy to overcome the type restrictions but staying type-safe.
// at.cpp
#include <algorithm>
#include <array>
#include <deque>
#include <string>
#include <vector>
template <typename T>
void use(T*, int){}
template <typename T>
void f(T& a){
if (a.size() < 2) return;
int n = a.at(0);
std::array<typename T::value_type , 99> q; // (4)
std::copy(a.begin() + 1, a.end(), q.begin());
if (a.size() < 6) return;
a.at(4) = 1;
a.at(a.size() - 1) = 2;
use(a.data(), 3); // (5)
}
int main(){
std::array<int, 100> arr{};
f(arr); // (1)
std::array<double, 20> arr2{};
f(arr2); // (2)
std::vector<double> vec{1, 2, 3, 4, 5, 6, 7, 8, 9};
f(vec); // (3)
std::string myString= "123456789";
f(myString); // (4)
// std::deque<int> deq{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
// f(deq); // (5)
}
Now, the function f works for std::array's of different sizes and types (lines (1) and (2)) but also for an std::vector(3) or an std::string (4). This container have in common, that their data is stored in a continuous memory block. This will no hold std::deque; therefore, the call a.data() in expression (5) fails. An std::deque is a kind of doubly linked list of small memory blocks.
The expression T::value_type (5) helps me to get the underlying value type of each container. T is a so-called dependent type because T is a type parameter of the function template f. This is the reason, I have to give the compiler a hint that T::value_type is actually a type: typename T::value_type.
This is obvious: A symbolic constant says more than a magic constant.
The guidelines starts with a magic constant, continuous with a symbolic constant, and finishes with a range-based for loop.
for (int m = 1; m <= 12; ++m) // don't: magic constant 12
cout << month[m] << '\n';
// months are indexed 1..12 (symbolic constant)
constexpr int first_month = 1;
constexpr int last_month = 12;
for (int m = first_month; m <= last_month; ++m) // better
cout << month[m] << '\n';
for (auto m : month) // the best (ranged-based for loop)
cout << m << '\n';
In the case of the ranged-based for loop, it is not possible to make an off-by-one error.
Let me directly jump to the rule ES.47. I want to put the rules for conversion including ES.46 in a separate post.
There are a lot of reasons to use a nullptr instead the number 0 or the macro NULL. In particular, 0 or NULL will not work in generic. I have already written a post about this three kinds of null pointer. Here are the details: The Null Pointer Constant nullptr.
How many explicit casts do we have in modern C++? Maybe your number is 4 but this is the wrong number. In C++11 we have 6 explicit cast. When I Include the GSL, we have 8 explicit casts. I will write about the 8 casts in the next86
All 1580622
Currently are 135 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more...
|
http://www.modernescpp.com/index.php/c-core-guidelines-rules
|
CC-MAIN-2019-09
|
refinedweb
| 1,125
| 75.91
|
Building a Spam Message Classifier and making an application of it deployed on Heroku
Today we will be building a neat bare bones Spam Message Classifier, a Natural Language Processing based model. Then we will build a flask application which will render an HTML based home page and a prediction page. The user will input text in the home page and the application will predict whether it seems like a Spam message or a Ham (not spam) on the prediction page. This flask API will be deployed in public host on Heroku. Heroku is the obvious choice here as it’s super quick and super easy. And did I mention it is free? Yes it is! Check out and play with the app yourself first by clicking on the button below to know what you’ll be building.
So now that you’ve seen it, let’s dive right in!
The Model
The first step will be to build our NLP model that takes in natural text from user and predicts whether it is a Spam or Ham(not a spam). We will do this in a few pieces of python code. The complete code and the dataset is available on my Github. First, let’s import all the libraries required.
import pandas as pd #Pandas for data pre-processing import pickle #Pickle for pickling (saving) the model from sklearn.feature_extraction.text import CountVectorizer #To Vectorize the textual data from sklearn.naive_bayes import MultinomialNB #The algorithm for prediction from sklearn.externals import joblib #Alternative Usage of Saved Model from sklearn.model_selection import train_test_split #Validation split
Next, we need to load the data and split into text data (X) and labels (y).
df = pd.read_csv("spam.csv", encoding="latin-1") df.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], axis=1, inplace=True) #Some garbage features that need to be removed # Text and Labels df['label'] = df['class'].map({'ham': 0, 'spam': 1}) X = df['message'] y = df['label']
Now we need to vectorize (convert text to numerical form) the textual data by using Count Vectorizer. Then just split the training and testing data. No further text preprocessing is taken into consideration as our aim is to build a model quickly and deploy it. You can experiment with the text cleaning part for yourself to make the performance better.
# Extract Feature With CountVectorizer cv = CountVectorizer() X = cv.fit_transform(X) # Fit the Data X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Once this is done, we just need to fit the model and save it as a pickle file which will later be loaded in the flask app.
#Naive Bayes Classifier clf = MultinomialNB() clf.fit(X_train,y_train) print(clf.score(X_test,y_test)) #Alternative Usage of Saved Model joblib.dump(clf, 'NB_spam_model.pkl') joblib.dump(cv, 'cv.pkl')
Notice here that we have saved the Naive Bayes’ model and the count vectorizer as well. This is because for your the text input which will be taken from user, will have to be vectorized using the same Count Vectorizer parameters as the model was trained on. Okay, so now we have our model and the vectorizer as well.
Next we need to build the flask app which will connect the NLP model we just built to our browser and later host it on cloud.
The Flask App
The flask application will route the user to the home page as soon as user accesses the URL of the application. Later when user inputs the text and hits the predict button, it will redirect it to the prediction page and render the output. Let’s see how it’s done.
We need to import flask and a few other functions. You would need to install flask by a simple pip install command if you don’t already have it.
from flask import Flask, render_template, request
Here, Flask will be used to define the app which we are using. Then render_template will render the home page (home.html) and the prediction page (result.html). And the request will be used to retrieve the text input which will be sent to the flask app via the HTML Post method from the HTML page.
Now we need to define the flask app and its default routing to the home page, which is home.html. We will define a function will do nothing but render the home page when the application URL is called.
app = Flask(__name__) @app.route('/') #Routes the app to below task when the URL is called def home(): return render_template('home.html')
Now comes the main part of the app, the prediction. We need to define a function that opens and loads the NLP model and Count Vectorizer pickle files. It then retrieves the text input which is sent to it via the POST method. The retrieved text input is then vectorized using the same vectorizer and dumped to our model to get the prediction. That’s it! We now just need to render this prediction result back on the prediction page and we’re done.
@app.route('/predict',methods=['POST']) def predict_fun(): NB_spam_model = open('NB_spam_model.pkl','rb') clf = joblib.load(NB_spam_model) cv_model = open('cv.pkl', 'rb') cv = joblib.load(cv_model) if request.method == 'POST': message = request.form['message'] data = [message] vect = cv.transform(data).toarray() my_prediction = clf.predict(vect) return render_template('result.html',prediction = my_prediction) #Calling the main function and running the flask app if __name__ == '__main__': app.run(debug=True)
Our model and flask apps are done. We now just need to deploy these files on Heroku.
Deployment
To deploy the application on Heroku you need to push all the files on your Github which will be linked to Heroku. You will need to upload following files to your repository :
- NLP model pickle
- Count Vectorizer pickle
- Flask app
- HTML & CSS files
- Requirements file
- Procfile
Okay so you might be wondering what requirements and the procfile files are. The requirements is a text file which contains all the dependencies (libraries like pandas, flask etc.) and their versions required for the app to run as expected. Heroku will install all these dependencies prior to deploying the app. Procfile is a configuration file which looks something like:
web: gunicorn app1:app
Here ‘app1’ is my flask file name which is ‘app1.py’ and ‘app’ is the flask app we defined earlier in the code above. This just tells Heroku to look for the flask file named app1 and in it the flask app defined as ‘app’.
Once you’re done pushing all the files to a new github repository, you will need to create an app on Heroku and connect your github repository to it. To do this, just go to Heroku. Create an account if you haven’t already and go to ‘create new app’.
In the newly created app, you need to connect to the github repository which has the code. Search for the github repository name in the search box and hit the ‘Connect’ button.
Once connected you just need to hit the ‘Deploy’ button and you’re done! Heroku will first check for the requirements file and install all the dependencies and then deploy the flask app to public host. A URL will be generated which is the URL of your application. Just copy it and open it in your browser and voila! you’re done.
That was it for this tutorial. Hope you got the app running and got decent results!
If you got this far and have the app ready, give yourself a pat on the back. You deserve it!
So we used Heroku Cloud here as it’s so quick and easy to setup. However I encourage you to try deploying the app on other cloud platforms as well – like AWS EC2, Google Cloud Platform and Azure.
Feel free to get the source code from my github and reach out to me for any queries and suggestions on LinkedIn.
See you on the next post, Cheers!
Image Credits
Featured Image: Unsplash
In-Post Photos: Photos taken by author, June 2020
|
https://www.aitimejournal.com/@chayan.kathuria/build-deploy-a-spam-classifier-app-on-heroku-cloud-in-10-minutes
|
CC-MAIN-2021-10
|
refinedweb
| 1,343
| 74.19
|
How do I map xs:date to java.util.Date?
santacruz40 wanted JAXB 2.0 XJC to map xs:date to java.util.Date. Here's how to do it.
The easiest way to do this is to simply modify the generated code. Just find out all the references to XMLGregorianCalendar and replace them with Date. With a modern IDE, this is surprisingly easy. But this only works if your schema doesn't change too often, for you don't want to do this too frequently.
If you'd rather have XJC generate the right thing for you, then you need to writea customization:
The javaType customization takes care of this. You can either copy the
Now, this maps xs:date to Calendar. To map this to Date, you need to define a pair static methods that convert from/to XML string and Date. Fortunately, that can be done relatively easily by using the above parse/printDate functions:
public class DateAdapter {
public static Date parseDate(String s) {
return DatatypeConverter.parseDate(s).getTime();
}
public static String printDate(Date dt) {
Calendar cal = new GregorianCalendar();
cal.setTime(dt);
return DatatypeConverter.printDate(cal);
}
}
Then replace the parse/printMethod attributes of the javaType customization with these methods.
- Login or register to post comments
- Printer-friendly version
- kohsuke's blog
- 51749 reads
I ran into the mapping problem when marshaling recently and ...
by zachncst - 2012-06-01 11:44
I ran into the mapping problem when marshaling recently and I immediately began to think I needed to add code to do the translation. All google results indicated I had to do more work. However, I didn't notice that the XMLGregorianCalendar had a getXMLSchemaType method. This lead me to believe that it was aware of what type it was translating from to. And in fact it is, this may have been the only solution in the past but if you are seeing XMLGregorianCalendar having problems you can just set certain columns of it to get the time you need. For example: xs:date is year, month and day only. If you're experiencing this problem in 2012, look at the documentation for the class. It helps.
Problem with XMLGregorianCalendar field in SOA request
by infoo - 2009-09-03 04:12We in the client code, till the control reaches the proxy class timestamp values is present in the request. But when the xml request is getting constructed(which we have logged in txt file by modifying clientconfig.xml) the date value is present but time value is getting dropped We had actually written the WSDL and had used axis2 eclipse plugin to generate request/response objects ,service & client stubs from it As our WSDL is already in production, please let us know if we have some sort of work around, to modify the Webservice client code without going for WSDL change. It is also not possible to tamper the request objects as they are generated from WSDL and are packaged as jar file and placed in production If in XML request, we are able to construct the time stamp, the service (generated from WSDL) is able to capture date time values. This was evident by constructing the timestamp request via SOAP UI / Lisa tool This workaround fix is required for time being which is very critical.. Thanks in advance.
|
https://weblogs.java.net/node/235094/atom/feed
|
CC-MAIN-2015-32
|
refinedweb
| 552
| 61.97
|
PDF contains unstructured data and making it meaningful or structured is a challenging task. It contains much useful Information that If you make a predictive or NLP model then it will beneficial to you. Currently, There are many libraries that allow you to manipulate the PDF File using Python. Like extracting text, tables, images and many things from PDF using it. These are also used in doing text analysis. In this entire tutorial of “How to,” you will learn how to extract text from PDF File using Python.
Step By Step Guide to Extract Text
Step 1: Import the necessary libraries
Although there are many libraries available for extracting text from PDF File. Here for the demonstration purpose, I am using PyPDF2.
import PyPDF2
Step 2: Open the PDF File
Now using the PYPDF2 you will Open the PDF File in RB(reading in bytes) mode.
# open the pdf file pdf_file =open('data/FOMC_report.pdf', 'rb')
Step 3: Read PDF and Check for Encryption
After opening the file Read the PDF File using PyPDF2.PdfFileReader() method and check for encryption using getIsEncrypted() method. It is a must as with encryption you cannot read the PDF File and extract the text. Use the Code Below.
# read pdf read_pdf = PyPDF2.PdfFileReader(pdf_file)
#check pdf is encrypted or not read_pdf.getIsEncrypted()
# no of pages read_pdf.numPages
Step 4: Extract the text
After knowing the number of the pages, you can extract text from it using the getPage() and extractText() method. The getPage() method will first get the page number of the Pdf file and extractText() will extract the text from that page number. In our example lets say I want to extract text from page number 1 then I will use the following code.
# extract text from page number 1 page1 = read_pdf.getPage(0) page1.extractText()
Before Splitting
If you see the output then a new line is replaced with \n. Now you can easily split the sentence using split(‘\n’) method. It will convert the extracted text to the list.
After Splitting
Conclusion
Converting Unstructured Text data from PDF to structured data is beneficial for you if you want to use Natural Language Processing (NLP). After extracting text data from PDF you can do anything like text preprocessing, word anagrams e.t.c.
Hope this post has solved your query on how to extract text from PDF File using Python. Please contact us if you have any query regarding anything. We are always ready to help you.
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox.
|
https://www.datasciencelearner.com/how-to-extract-text-from-pdf-file-using-python/
|
CC-MAIN-2020-50
|
refinedweb
| 433
| 74.29
|
*****************************************Arduino Code*************************************************
/*
Example program for using sleep modes and watch dog timer in Arduino. This example code was used in a sleep mode tutorial video on the ForceTronics YouTube Channel.
This code is open for anybody to use at their own risk*/
/*WDT BYTE variables for setting timer value
WDTO_15MS
WDTO_30MS
WDTO_60MS
WDTO_120MS
WDTO_250MS
WDTO_500MS
WDTO_1S
WDTO_2S
WDTO_4S
WDTO_8S */
#include <avr/sleep.h>
//We use part of the WDT library, but have to use registers as well since library does not support interrupt mode for WDT
#include <avr/wdt.h>
int led = 13; //variable for pin that the LED is on
int tog = 1; //variable that toggles between traditional delay() function and WDT sleep delay function
void setup() {
wdt_disable(); //Datasheet recommends disabling WDT right away in case of low probabibliy event
pinMode(led, OUTPUT); //set up the LED pin to output
}
void loop() {
if(tog) { //use traditional delay function
digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level)
delay(1000); // wait
digitalWrite(led, LOW); // turn the LED off by making the voltage LOW
delay(1000); // wait
tog = 0; //toggle variable
}
else { //after blinking LED setup interrupt and then go to sleep. Note that sleep will only happen once sinc
digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level)
delay(1000); // turn the LED on (HIGH is the voltage level)//
digitalWrite(led, LOW); // turn the LED off by making the voltage LOW
delayWDT(WDTO_1S); // Use WDT sleep delay function, argument is byte variable from WDT Library
//delayWDT(0x06); //Use WDT sleep delay function, argument is byte value that sets timer to 1 second
tog = 1; //toggle variable
}
}
//This function serves as a power saving delay function. The argument is a Byte type variable that is used to set the delay time
//The function sets up sleep mode in power down state. The function then sets up the WDT timer in interrupt mode and sets it.
//It then puts the Arduino to sleep for the set time. Upon wake up the WDT and sleep mode are shut off
void delayWDT(byte timer) {
sleep_enable(); //enable the sleep capability
set_sleep_mode(SLEEP_MODE_PWR_DOWN); //set the type of sleep mode. Default is Idle
ADCSRA &= ~(1<<ADEN); //Turn off ADC before going to sleep (set ADEN bit to 0)
WDTCSR |= 0b00011000; //Set the WDE bit and then clear it when set the prescaler, WDCE bit must be set if changing WDE bit
WDTCSR = 0b01000000 | timer; //Or timer prescaler byte value with interrupt selectrion bit set
// WDTCSR = 0b01000110; //This sets the WDT to 1 second
wdt_reset(); //Reset the WDT
sleep_cpu(); //enter sleep mode. Next code that will be executed is the ISR when interrupt wakes Arduino from sleep
sleep_disable(); //disable sleep mode
ADCSRA |= (1<<ADEN); //Turn the ADC back on
}
//This is the interrupt service routine for the WDT. It is called when the WDT times out.
//This ISR must be in your Arduino sketch or else the WDT will not work correctly
ISR (WDT_vect)
{
wdt_disable();
MCUSR = 0; //Clear WDT flag since it is disabled, this is optional
} // end of WDT_vect
|
http://forcetronic.blogspot.com/2014/
|
CC-MAIN-2017-22
|
refinedweb
| 508
| 52.97
|
Getting Started with React Native in 2019: Build Your First App
Aman Mittal—
January 04, 2019
Credit:. There are different frameworks that. Pros such as being less-time consuming and cost-effective and cons include performance issues. are you going to learn?
In this tutorial, you will learn the following things:
- What is React Native?
- Setting up the Development Environment
- Use React Native CLI
- Run a React Native App
- What is App.js?
- Hot Reloading
AppRegistry
- Build your first React Native App
ViewComponent
StyleSheetObject
TextComponent
- Create a list with
FlatList
- Learning Path for React Native
What is React Native?
In a nutshell, React Native allows you to build mobile applications that look, feel and perform much more like native applications. It uses the same fundamental UI building blocks as regular iOS and Android apps. You just put those building blocks together using JavaScript and React. Good thing for developers is that they can use almost the same concepts that are being used for building web applications.
If you are familiar with Reactjs or come.
In simple words, React Native brings the React to mobile app development. Its goal isn’t to write the code once and run it on any platform. The main goal here is to learn once and write-anywhere. An important distinction to make. React Native is still relatively new, as at the time of writing this post, it is in its version
0.57.
Pre-requisites: Setting Up Development Environment
To dive deeply in React Native’s ecosystem, we need to install a few things first to get started. Let us go through one of them.
Nodejs & Watchman
React Native uses Node.js, a JavaScript runtime, to build your JavaScript code. If you do not already have Node.js installed, it’s time to get it from its official website here. I recommend installing LTS (long-term support)
10.x.x version which is also I am using personally.
Watchman is a tool developed by Facebook for watching file changes. It is highly recommended you install it for better performance. For Mac users, You will need the
homebrew macOS package to install
watchman:
brew install watchman.
For Windows users, there is no
watchman so you can skip this step but you need to have Nodejs as well as
python2 as React Native's recent version requires it.
Lastly, everyone (irrespective of the OS you are using) need to install theJava SE Development Kit (JDK) that can be found here. Make sure the version you install is at least or more than
>= 8.
Native SDKs
For macOS developers, you can install Xcode which is free to develop iOS applications.
If you want to develop for Android, setting up its development environment can be a bit tedious if you are new to this. You will be installing Android Studio which is a completely free tool to develop Android apps in its native language. You will be installing a list of utilities for this process and then setting path variables for the first time, so I am recommending to go through the exact link here which are official setup instructions provided by Facebook.
React Native CLI
Once you are done with the development environment setup process and necessary tools, you can take a deep breath right now. We are going to start building our first REACT NATIVE APP. In order to start, we need one more tool. Using
npm (a package manager, which you installed using Node.js) you are now going to install
react-native-cli. Open your terminal and run the following command.
npm install -g react-native-cli
This CLI tool is used to scaffold a starter project containing everything you need to build and run a React Native app.
npm installs this CLI tool as a global module.
react-native --version## outputreact-native-cli: 2.0.1
To verify that the installation process was a success, you can run the command below and it will output you the current version of the CLI tool.
Running a React Native App
To get started we need to create a project directory using the CLI tool just installed. Open up your terminal and run the following.
react-native init EmojiDictRN
You can name it whatever you want. Once the process is done, traverse inside the project directory. You will be welcomed by a set of files like below.
From above let us now take a brief look at the files or directories that are essential for us to understand:
- App.js the first file in any React Native app that is the entry point of the app development process. Whatever you write inside this file, it will get displayed on the mobile device.
- node_modules/ is a folder which contains all the dependencies (or packages) that are used to develop and run this application.
- index.js is the entry point to trigger the app on a device or simulator
- ios is the folder containing an Xcode project and the code required to bootstrap this app for iOS devices
- android is the folder containing android related code to bootstrap this app for Android devices
- package.json where every dependency installed gets listed
You can ignore the other files as of now.
Running the Application
The
react-native-cli tool comes with some default snippets of code. To see it in action, you will have to run the application using a terminal. I am going to use an iOS simulator and an Android emulator for this purpose. Windows developers can ignore the iOS part.
npm start
Do note that, we have not made any changes in the source code of the application. To run the app, we need to trigger the below command first.
This will start the metro bundler to watch for any file changes in a
.js file in your project. Make sure this command is running in a separate terminal window or a tab when you are building your project for
iOS or
Android.
Running on iOS
To run the app with whatever current content it has on an iOS simulator, you can run the following command below in the second terminal window.
react-native run-ios
This command builds your app and starts it on iOS simulator. This process consumes some good amount of time when building the necessary iOS files for the first time for any React Native app. It will also open up a simulator device for you like below when the process is done.
This iOS simulator is the default one with current Xcode version you have. However, you can run any sim device by adding a flag. By running the command:
xcrun simctl list devices you can check out which devices you have available as simulators.
The last
Booted or
Shutdown against each listed device in above image tells you which devices are currently running. To build and run for another device, you can run the following command.
react-native run-ios --simulator="iPhone 8 Plus"
where
"iPhone 8 Plus" is the value that you can look up through the last command I mentioned.
Running on Android
You will need an Android device to run your React Native Android app. This can be either a physical Android device or more commonly, you can use an Android Virtual Device which allows you to emulate an Android device on your computer.
If you wish to run it on a real device, you can follow the complete set of instructions here. For running on an Android emulator, open the Android Studio, and choose the option to ‘open an existing project/folder’. Once the project gets opened and is indexed, you will see an icon looking exactly like below image in the right corner.
This is an option for enabling an Android Virtual Device (AVD). If you have just installed Android Studio, you will likely need to create a new AVD. After the virtual device is running, you can run the command
react-native run-android from your terminal window to open up the application.
How to Modify App.js?
To see the app in action on both the devices, let us modify
App.js with the code below.
import React, { Component } from 'react'import { Platform, StyleSheet, Text, View } from 'react-native'export default class App extends Component {render() {return (<View style={styles.container}><Text style={styles.instructions}>Hello World!</Text></View>)}}const styles = StyleSheet.create({container: {flex: 1,justifyContent: 'center',alignItems: 'center',backgroundColor: '#F5FCFF'},instructions: {textAlign: 'center',color: '#333333',marginBottom: 5}})
The result of the following modification can be seen if you press
Cmd + R on iOS and double
R on Android.
Enable Hot Reloading
Hot Reloading feature in react native application helps to display any updates occur in UI, whenever you save anything in react native app-code. On enabling this feature, you do not have to press
Cmd + R on iOS and double
R on Android again for seeing the changes on the UI you just made.
To enable this feature, all you have to do is press
Ctrl + M/Cmd + M depending on your OS, and select Enable Hot Reloading from the popup menu that appears as shown above.
What is AppRegistry?
The file that renders this App component is
index.js in the root directory which has the following code.
/** @format */import { AppRegistry } from 'react-native'import App from './App'import { name as appName } from './app.json'AppRegistry.registerComponent(appName, () => App)
AppRegistry is the entry point to run a React Native application. App component or any other root component in the app should register by using
AppRegistry.registerComponent such that the native system can load the bundle of the app and run the app by starting
AppRegistry.runApplication.
You can read more about
AppRegistry in-detail here.
Baby Steps: First React Native App
In this section, you are going to build your first React Native app. To get started, we have already generated a React Native project using the cli tool. The only thing now you have to understand is what are components?
Components are the visual elements that you see on the screen in a React Native app. There are several components made available for you to use by the React Native core. To understand this better, we can categorize these components in six broad categories:
- Basic or Core components such as
View,
Text,
Image,
ScrollView,
TextInput,
StyleSheet
- List components such as
FlatListand
SectionList
- User Interface or Form Control components such as
Picker,
Slider,
Button,
Switch
- iOS Specific components such as
ActionSheetIOS,
SegmentedControlIOS,
AlertIOS,
PushNotificationsIOS
- Android Specific components such as
DatePickerAndroid,
TimePickerAndroid,
ViewPagerAndroid,
ToastAndroid,
PermissionsAndroid
- Other/Miscellaneous components such as
Alert,
Animated,
CameraRoll,
Dimensions,
Clipboard,
StatusBar,
Linking,
Keyboard,
ActivityIndicator,
WebView, and
Modal
Getting in detail about each of them is out of the scope of this article and will be a tedious way to learn things initially. Instead, we are going to use a project-based approach to learn your way through them. There are many more components and APIs available in React Native core that you can look at the official documentation and from time to time you will need to.
What are we building?
You are going to build a small application to just familiarize yourself with basic components. The app is shown in the image below is going to be the end result.
The above is nothing but a list of text coming directly from the component’s state. Create a new
src/components directory in the root of the project and inside
components/ create a new file called
EmojiDict.js with the following snippet of code.
import React, { Component } from 'react'import { View, Text, StyleSheet } from 'react-native'class EmojiDict extends Component {state = {'😃': '😃 Smiley','🚀': '🚀 Rocket','⚛️': '⚛️ Atom Symbol'}render() {return (<View style={styles.container}><Text>{this.state['😃']}</Text></View>)}}const styles = StyleSheet.create({container: {flex: 1,justifyContent: 'center',alignItems: 'center'}})export default EmojiDict
Accordingly, we have to modify
App.js file in order to display the result of this component.
import React, { Component } from 'react'import EmojiDict from './src/components/EmojiDict'export default class App extends Component {render() {return <EmojiDict />}}
Now if you take a look at the simulator screen you will see the following result.
What is going on? Take a look at the
EmojiDict file first. We are importing basic components from React Native. We start by declaring a
View component, which is the basic building block in React Native file. It maps to fundamental native iOS (
UIView) and Android (
View) components, hence the name. You can think this component as mere
div element from HTML where all other elements are placed inside. Hence, a
View component can contain nested components.
View component puts a container element that supports layout styling with
flexbox and other styles through CSS. We are providing styles to View via
StyleSheet. Hence, you can say that
View components are primarily used for styling and layout of children elements.
StyleSheet use
StyleSheet.create() method.
The way we have defined styles by creating an object is the preferred way. Not only it helps you organize styles and keep them separate, but these styles when defined in this manner are also sent through the native render bridge only once.
The
Text component is in many ways just like the
View component, except that it is specifically available to display text. Also, like the
View component, it supports styling. Right now we are using
flexbox to style and center the anything inside the
View component.
Flexbox is an algorithm to specify the layout for a component for its children to follow the same pattern. Suppose if we modify it as below:
render() {return (<View style={styles.container}><Text>{this.state['😃']}</Text><Text>{this.state['🚀']}</Text></View>);}
You will get the following result on refreshing the simulator.
The way we are creating a list of emojis isn’t a pragmatic approach to handle data whether it comes from a third party API or manage by the component’s state and render it as a list as we do above. Let us convert our simple view into
FlatList.
<FlatListcontentContainerStyle={styles.container}data={[{ key: '😃', value: '😃 Smiley' },{ key: '🚀', value: '🚀 Rocket' },{ key: '⚛️', value: '⚛️ Atom Symbol' }]}renderItem={({ item }) => <Text>{item.value}</Text>}/>
FlatList is cross-platform, by default vertical way to display a list of data items. It requires two props:
data and
renderItem.
data is the source of information for the list.
renderItem takes one item from the source and returns a formatted component to render. Styles that can be applied to a
FlatList component is done by the prop
contentContainerStyle that accepts the value of
Stylesheet object. What we have above is the simplest version of flatlist. Moreover, FlatList in React Native has support to pull to refresh interaction and horizontal display mode.
This completes our first React Native App. I am sure, you might have learned a thing or two. It is a just a basic component that renders a list of items.
More on Learning React Native
With lack of up to date resources or not many resources you will find on React Native concretely, I urge to stick with you learn by doing process and get as much as hands on experience in this field as you can. I did struggle when I started learning React Native, coming from a Web Development background., today learned about using
FlatList component in this article., take a look at:
- ES6 Features
- ReactJS Components API
-.
Originally published at Level up coding
|
https://amanhimself.dev/getting-started-with-react-native-in-2019-build-your-first-app/
|
CC-MAIN-2020-10
|
refinedweb
| 2,571
| 64.2
|
Opened 16 years ago
Closed 16 years ago
Last modified 16 years ago
#1940 enhancement closed fixed (fixed)
make Twisted tests runnable from within a zipfile
Description
I have one project on hand that needs to load Twisted from a zipfile, and reliability is important, so I want to be able to run unit tests in this deployed configuration for the behavior I'm going to be using.
This creates several problems. First of all, a few tests that I'd really like to run in the context of this project use file to refer to sibling files, and that means that there needs to either (A) be a resource-loader in Twisted so that the tests can be converted to a different idiom, or (B) the trial test-discovery mechanism needs to honor
__path__. Ideally both of these things would be true.
I'm going to be starting on a branch for this tonight, and I believe it will also provide a slightly more general solution to #1276 than James wrote in his branch (since it wasn't addressing zip files). I've reviewed the issue as carefully as I can, because there are a bajillion extremely subtle issues involved.
First, I'm going to create a new twisted.python module, "pyspace", which abstractly represents the Python
sys.path object space that it seems like most Twisted developers think in terms of these days. There are lots of ad-hoc ways to inspect this, and trial's
TestLoader is probably the most complete, but there's no abstract object to look at. I looked at setuptools to make sure that I understood if I was replicating any behavior, and it doesn't appear to have anything like this either; it has ways to inspect the "distribution" space, but not so much just plain old sys.path.
Here's my proposed API.
pyspace.top will be a singleton with
__getitem__(packagename),
__iter__, and
walk() methods.
top['twisted'] will return an object with this same interface (although quite likely a different implementation),
__iter__ will yield a sequence of these objects, etc. Each will also have several utility attributes giving metadata about the represented module (or package), and a "load" method which returns the imported thing.
Ideally this would start with a very simple implementation based on the filesystem, in this branch grow a zip-compatible extension, and eventually get very sophisticated, extracting all available metadata (lists of classes, functions, methods, attributes) without evaluating anything, using similar techniques to what pydoctor does.
To give a simple example, disentangled from the mess of complexity that is trial, imagine a hypothetical 'module lint' tool, designed to provide you with a list of non-stdlib, non-packaged modules on your python path.
import os stdlibdir = os.path.dirname(os.__file__) from twisted.python.pyspace import top for module in top: if module.sysPathEntry != stdlibdir and '.' not in module.name: print 'Oops, unpackaged module: ', module
JP and I went through all the myriad use-cases for supporting
__path__ and eventually decided that it wasn't worth it in the general case, so we are going to support it only for already-loaded packages, which is to say Twisted, which is to say, the only module that we are ever likely to care about
__path__ for in any practical scenario (after the repository is internally split up).
This might be revised, but implementing support for some form of
__path__ can easily be done under the covers of the implementation; the interface, and trial (and friends') use of it can remain unchanged.
Change History (25)
comment:1 Changed 16 years ago by
comment:2 Changed 16 years ago by
(please insert "parts of" before "this proposal" in above comment). Clearly there are many parts of this proposal that are not in any way covered by the pkg_resources API.
comment:3 Changed 16 years ago by
I disagree for the following reasons, but you're right, I should have addressed
pkg_resources right off the bat because it's superficially similar.
- I am talking about a module which mainly manages dealing with Python objects and Python hierarchies.
pkg_resourcesdeals primarily with resources. Now, it would be possible to implement this in terms of the
pkg_resourcesAPI rather than a trivial application of filesystem and zipfile operations, but I don't feel that it's going to be a win, since the additional dependency is a pain for anyone. For example:That last bit is the most telling. I don't know whether cfsupport is a package or not without a dozen other method calls; ".svn" is not something meaningful to me in this context. The output isn't worse than listdir, but it's not much better, either. I feel that this is the most compelling reason.
>>> pkg_resources.resource_exists("twisted", "internet/serialport") False >>> pkg_resources.resource_exists("twisted", "internet/serialport.py") True >>> pkg_resources.resource_exists("twisted", "internet/serialport.pyc") True >>> pkg_resources.resource_exists("twisted", "internet/serialport.pyo") False glyph@alastor>>> pkg_resources.resource_listdir("twisted", "internet") ['.svn', 'cfsupport', (...)]
- For the same reason,
pkg_resources's listing methods do not seem to honor
__path__in the normal case. While our own support of it will initially be fairly hacky and ad-hoc, and work only for one particular use-case (which, oddly, happens to be the same on my weird-o little linux machine and for the Twisted split, e.g. #1276). While I don't have to solve that problem at the same time, but it seems like it would be nice if I could. Also,
pkg_resourcescan't really add support for other types of Python objects, which is where I'd really like this to go.
- The motivation for me to start thinking about this involved a project for deployment on an embedded device, where installing all of setuptools and dealing with the additional import overhead of eggs and a good deal of metadata that I don't need is actually a problem. If
pkg_resourceswere actually the right way to deal with this in a "normal" Python installation situation then I'd probably write something else for this particular project and propose something using
pkg_resourcesfor Twisted, but it isn't for other reasons. I am probably going to end up re-implementing a trivial subset of the functionality offered by eggs, but I don't see that as a bad thing.
- it's a trivial subset; a good deal of setuptools is geared towards doing things like dependency verification.
- it's a non-conflicting subset; the package listing stuff I'm working on can be (and probably will be, in short order) made to work with the egg importer as well as the zip importer, which will likely make twisted more amenable to packaging as an egg.
- Twisted has its own zipfile-stream implementation which would be easy to use to implement an equivalent to
get_resource_streamwhich did not stuff the entire contents of a zip chunk into a StringIO. Again, mumble, embedded device, limited memory, IO bandwith etc, I don't want to read any more data than I have to. Honestly we should probably contribute this to the standard library so that setuptools can eventually do the same thing, but it will be a good 1.5 years before it would be realistic for me to use it in that way, and I'd really like the streaming functionality before then.
To reiterate, although I like inventing things, this isn't NIH. It will be used for radically different things from
pkg_resources, although it may also be used for a few of the same things. Eggs will work unmodified with the code I plan to write with no additional changes, and I consider that a good thing. They are using the same underlying APIs (the zip importer API and the new import hooks API) and so I don't forsee any risk of divergence.
Ultimately the only use for
pkg_resources in this implementation would be to remove about 10 lines of code that called functions on importer objects; and if you take a look at
help(sys.path_hooks[0]) on any python interpreter you will see that most of the interesting methods are just exposed there anyway.
comment:4 Changed 16 years ago by
The API pkg_resources provides is not all trivially implementable on top of the python loader API; it provides additional capabilities. If you write another loader, you can also register a pkg_resources provider to provide those capabilities for your loader. This is much more likely to happen than for somebody to add that capability to twisted's copy of the API, especially so once it becomes included with python.
Anyhow, even if we don't want to depend on it until it _is_ part of python, I think it would be at least a good idea to use the same API, from all twisted code that needs to access resources. That's mostly tests, and also this thing you're writing. Whether or not the API actually comes from the "pkg_resources" module can be modified later.
Also, there's one part of that is somewhat non-trivial, and that we almost certainly will need: get_resource_filename and the related extraction of stuff from the zipfile into a temporary directory hierarchy.
Finally, please look at pkgutil.iter_modules, and pkgutil.walk_modules. Which _does_ handle path.
comment:5 Changed 16 years ago by
The de-facto python loader API is the extended one provided by the zip importer; some of its methods are even mentioned (although oddly not required) by the PEP. I am just going to call the zip importer's methods for now, and perhaps optionally call down into
pkg_resources' API later, to avoid the problem you describe - I don't want people to have to write implementations of the backend for Twisted, but going through pkg_resources isn't any easier, it just provides a layer of indirection if there were a bunch of really interesting things that people were doing with importers (which they aren't, at least currently).
The
pkgutil functions you mention are available only in Python 2.5, and in any event they walk around importing every package they find, which I explicitly want to avoid, since it caused problems with the last implementation of #1276. I do wish that
get_importer existed in 2.3, though.
For now, I'm going to consider the resources stuff a separate issue and focus on making the object-space-traversal right. I don't like the
pkg_resources API for that either, but for completely different reasons which may or may not be spurious and it would be a waste of time to discuss right now. If
__path__ support works right and I can put the test-cases in a filesystem directory, I'll be perfectly happy for my current use-cases.
comment:6 Changed 16 years ago by
We can backport python functionality; we've done it before, we should do it again if useful. If you want get_importer in 2.3, simply add it to pkgutil in 2.3.
I know it's not as sexy as an object space thingamabob, but to accomplish the title of this bug, all that's really *needed* is walk_modules (to find the modules), and a resource API (to allow tests to find associated data files).
So if that's not what you want to work on, which it sounds like it's not, maybe you should change the title. :)
comment:7 Changed 16 years ago by
All that's needed is:
- a different implementation of
walk_modulesthat does not use
__import__, but tries to respect
__path__.
- a re-implementation of
TestLoaderon top of that.
and that is basically what I'm doing, except I'm cleaning up
TestLoader at the same time to make it usable for non-trial cases as well.
After looking at
pkgutil, I am on the fence right now about backporting
ImpImporter. That might be a big help; I will grab it if I need it.
comment:8 Changed 16 years ago by
Summary of my main points, persisted from irc conversation:
- I think your module sounds like it does have useful novel functionality
- It should be implemented on top of functionality in python, or slated to be added to python, where possible.
- My default position is against unrelated-to-async-networking functionality being added twisted just cause it seems like a convenient place, but that's sway-able.
- If a python core module is untested or undocumented, the solution isn't writing a new module that does the same thing only different.
- I haven't seen anything to show that the novel functionality is actually necessary for twisted (vs. the non-novel functionality already available in pkgutil and pkg_resources)
comment:9 Changed 16 years ago by
Re: a different implementation of walk_modules that does not use import, but tries to respect path.
I don't think that's really necessary. Sensible error handling upon failure to import the subpackage is fine. For trial, that means printing a warning or a [SKIP], for other use-cases that may simply mean silently ignoring it.
comment:10 Changed 16 years ago by
Thanks for offering to review.
comment:11 Changed 16 years ago by
comment:12 Changed 16 years ago by
General
- Docstring text should begin immediately after the triple quote, not on a newline (see PEP 8, example in Coding Standard and the vast majority of docstrings in Twisted)
- File a ticket for Trial's crazy sorting behaviour
- twisted.trial.test.test_loader.PackageOrderingTest.test_sortPackagesSillyOrder failed when I ran it on the 2.3 buildbot
- The tests in test_module could be more unit-y. However, given they cover the public api pretty well, I'm ok.
twisted/python/filepath.py
- Document how _PathHelper.walk traverses the tree
- _PathHelper.segmentsFrom says it takes a FilePath. Does this actually mean FilePath or should it 'instances of _PathHelper subclasses'. If the latter, consider defining an interface
twisted/test/test_paths.py
- test_paths.AbstractFilePathTestCase.test_segmentsFromNegative docstring should say """Verify that segmentsFrom notices when the ancestor isn't an ancestor""" or something
- "appropriate introspection qualities" doesn't mean anything
twisted/trial/test/test_loader.py
- testInvalidSubdir, testValidFiles and testGetAndSet should be test_invalidSubdir, test_validFiles and test_getAndSet respectively.
- Remove commented out debug stuff in loadSortedPackages
twisted/trial/runner.py
- runner.suiteVisit should be runner.visitSuite
- remove # raise AttributeError from runner.suiteVisit
- PyUnitTestCase.id does exactly what pyunit.TestCase.id does, afaict. Remove the override.
- Write tests for ErrorHolder.visit (or else remove comment, or else file bug)
- In TestLoader, remove moduleGlob, replace with modulePrefix, and use self.modulePrefix in loadPackage
- Shouldn't _findTestClasses use t.p.modules?
twisted/python/modules.py
- t.p.modules docstring refers to 'top', which has since been replaced with methods (good move btw)
- Do we really have to import generators from the future still?
- _isPackagePath(FP('foo/bar/suckedininit.py')) returns True. It probably shouldn't.
- _ModuleIteratorHelper.iterModules docstring says 'yield nothing'. Does this mean 'not yield anything' or 'yield t.p.modules.nothing'?
- I'm a little bleary right now. What's the difference between _ModuleIteratorHelper.walkModules and _ModuleIteratorHelper.iterModules?
- docstring of getitem refers to 'pyspace', should say 'modules'
- Hurrah for iter's docstring & implementation
- PythonAttribute.init's docstring should so "do not construct me directly, use <foo>"
- Should PythonAttribute.load()'s default default be nothing?
- PythonModule.init's docstring should say "do not construct this directly".
- It's unclear what I should expect to happen if I try to load a bad module. If it's _deliberately_ unclear, then there should be a docstring somewhere
- Boy, getitem makes for some non-obvious recursion.
- I don't really understand "# feh something with importers... "
- Likewise, everything from !IPathImportMapper to _ZipMapImpl is a mystery to me.
twisted/test/test_modules.py
- test_modules has inconsistent vertical whitespace. Recommend 1 lines after methods, 2 lines after classes
- PathModificationTest docstring should say "These tests share" rather than "Test stuff that shares"
- test_modules._serialnum deserves its own comment
This API is well overdue. Thank you very much for doing the hard work of getting this stuff as right as it can possibly be.
The code looks good. I can only provide such detailed feedback because it works and is (mostly) clean.
comment:13 Changed 16 years ago by
Since there's so much here I'm going to respond to each point-by-point and tell you what I'm going to do, rather than just hope you catch it all in a re-review. I'm only mentioning stuff with open issues, anything not mentioned I will fix.
General
- Beginning docstring text immdiately after the quote breaks a few things; paragraph filling in emacs, for starters, and it makes the first ~60 characters "special" in various tools, which I don't like. It is in the coding standard though, so I'll change it for now - but I think this is probably a discussion that needs to be reopened (elsewhere) based on what pydoctor expects, or should expect.
- You take care of the sorting ticket. I don't know how it should work for real, only that it was badly broken before, and this branch just makes sure there is some sort order, not that it's the right sort order.
- I'll fix 2.5 ( ) and 2.3 ( ) test failures. Although maybe Python shouldn't change so damn much.
- I like the level of integration covered in test-module; I wrote them white-box on purpose because I want internal refactorings to be easy. I don't particularly like the internal structure of the objects right now, but the API they expose is good.
filepath
- I'll check the tests and either (A) move
segmentsFromback to FilePath if nothing's using it, or (B) document it differently otherwise.
runner
- I changed the name to suiteVisit because visitSuite means something different; it's a method of SuiteVisitor. I think I'll leave that as-is unless you have a strong objection.
_findTestClassesshould, yes, and other things (isPackage et. al. come to mind) should probably start using it too, but I was writing this in support of a particular goal - zipfile support - and I didn't make that change. I don't want the refactoring to be bigger than necessary and it is already pretty freakin' big.
ErrorHolderneeds a whole slew of tests, and they're getting out of scope. How about this: a .skipped, empty test which describes what needs to be tested, but remove the comment? And an issue, too.
modules
- "nothing" shouldn't have a non-private name, so it "yields nothing" means the normal thing, i.e. "StopIteration after no results"
iterModulesis always shallow,
walkModulesis deep. Maybe the docstrings should use those words.
- loading bad modules: it's unfortunately the same weird, undocumented error behavior as namedAny. Specifying or documenting this in a way that makes sense is just out of scope here: one potential outcome (which I actually see on a semi-regular basis) is an interpreter segfault or abort. Just don't load bad modules. I will try to address it at least a little better in the docs though, by making it sound scary :). BTW: this is one of the reasons I commented on #1276 the way that I did.
I'm glad you appreciate this :).
comment:14 Changed 16 years ago by
Cool. Assume "yes" for things I haven't replied to.
- So
nothing=>
_nothing? (I think 'yes please')
- Loading bad modules: Trial has to do this, because people are stupid and trial needs to be stupid-proof. Definitely update the docstring.
- I'll file a ticket re ErrorHolder. No other action required.
- I'll file a ticket for _findTestClasses et al. No other action required.
No need to re-assign to me when done -- just merge, provided tests pass on buildbot.
comment:15 Changed 16 years ago by
comment:16 Changed 16 years ago by
comment:17 Changed 16 years ago by
OK, I'm about to merge.
Thanks for filing tickets.
You might want to give this one last look since the satisfy-the-review diff was bigger than I thought it would be; there's a lot of stuff bridging the 2.3->2.4->2.5 triple that we now support.
The tests seem to be passing on 2.3 and 2.4 for me, but 2.5 I had to build myself, which is uncovering other issues (I will file a ticket on that in a little bit).
If I remove 'id' and the override, the doctest test starts failing again. I am deeming that an involved enough issue to get its own branch at this point.
IPathImportMapper is just the way that we decide what the string entry on sys.path means, based on the PEP 302 importer protocol. Given that the importer protocol is in flux, it's an internal detail and not terribly important to understand :).
As far as vertical whitespace; you were right that it's inconsistent, but i'm going for 2 lines between methods, 3 between classes, because I want to reserve single blank lines for discretionary separators between sections of a method.
comment:18 Changed 16 years ago by
Looks good. Merge away.
comment:19 Changed 16 years ago by
(In [17651]) Make it possible to run trial on tests located within a zip file.
Author: glyph
Reviewer: jml
This change also introduces a major new piece of functionality, twisted.python.modules, as well as twisted.python.zippath, which allows for FilePath-like access to zipfiles.
As part of changing trial to support zipfiles, another issue was discovered and fixed, which is that trial's sort support was previously broken. This is now fixed, and trial will, by default, sort tests alphabetically now, by fully qualified name.
comment:20 Changed 16 years ago by
comment:21 Changed 16 years ago by
I'm still _quite_ skeptical about the use for this in twisted.
As far as the functionality trial needs is concerned, this module has pretty much exactly duplicated the functions pkgutil provides. What is the point in that?
comment:22 Changed 16 years ago by
Having a pkgutil-based implementation in a branch (apart from a complete Twisted repo reorganization) would make it easier to compare these two solutions.
comment:23 Changed 16 years ago by
comment:24 Changed 16 years ago by
"What is the point in that?"
pkgutil, while ostensibly eventually the right way to do this, is:
- untested
- undocumented
- buggy
- changing
- only available in a beta version of python
This is a core piece of functionality that trial depends on intimately, and that other functionality (the plugin system) also implements. (see #1951).
We have backported Python functionality in the past, but never from beta versions of Python, and we explicitly do not make local modifications to it, or write our own test suites.
And personally, after evaluating them endlessly while I was considering your objections to this branch, I have to say that even if it were documented, tested, stable, and available everywhere, I find pkgutil's API is still gross. It yields strings with no metadata aside from an 'importer' (an API which is also still in flux), and its API doesn't extend into non-module python objects, such as classes and methods and attributes. With twisted.python.modules, twisted (trial plus plugin) can eventually use a unified API for all its discovery; with pkgutil, it has to use walk_packages for some stuff, inspect for other stuff, and pkg_resources for yet other stuff, passing strings between the three and sometimes cutting them up for various reasons.
I think that the right thing to do here is, once python2.5 has been out for a little while, propose modules, zippath, zipstream and filepath for inclusion in 2.6. Perhaps as part of doing so, we will re-implement several things in terms of different APIs, but I think the public API they're presenting now is about right.
It looks to me like this proposal essentially duplicates:
Or, if not duplicates, at least *should use* as its underlying API.
In particular, you can write a "walk all the modules under this module" on top of listdir and isdir.
|
https://twistedmatrix.com/trac/ticket/1940
|
CC-MAIN-2022-27
|
refinedweb
| 4,026
| 62.27
|
The following instructions will help you to install your template.
1.
Template Installation
Installing through the Joomla! CMS Extensions Manager:
— In the administration section of your Joomla! web-site go to the Extensions → Extensions
Manager.
— Then in the “Upload package file” section open the Package File upload dialogue box by
clicking on the “Browse” button.
— Find the theme .zip archive from the “joomla” folder in your downloaded template package, click
Open and then click “Upload & install”.
Manual installation
— Open your downloaded template package and go to the “joomla” folder.
— There you’ll see the “theme###.zip” package. Extract the files from the package.
— Uploaded the extracted “theme###” folder to the “joomla/templates” folder on your server.
— Then open the Joomla! Administration panel, go to “Extensions → Extensions Manager →
Discover”, then click the “Discover” button in the top right corner.
— Below you should see the manually installed template. Select it and click “Install” button.
2. Template Activation
When you are done installing the template you need to activate it. Open the Joomla! administration
section, then using the main menu please, go to the “Extensions → Template manager” page.
Then choose the newly installed template by clicking on the checkbox and click “Make Default”
button in the upper right corner. This will enable your template as a default one for your site.
Then click on the template name to enter the template configuration page. You need to assign the
template to the website pages. To do this click “Toggle Selection” button in the “Menus assignment”
section.
3. Advanced Options
Each template has the advanced options that allow you to modify some template features directly
through the admin panel. At the template configuration page take a look at the “Advanced Options” section
in the right column.
There you can upload your custom logo, insert the website title and description etc.
4. Modules Configuration
Our templates are supplied with the sample pages, modules and configuration settings. In other words
to make the template look as at the preview you need to install and configure the modules.
There are two ways to configure the template modules:
a) by importing .sql file into the Joomla! database;
b) manually configuring template modules through the Joomla! administration panel.
Importing dump.sql file into your joomla database
ATTENTION:
import .sql file ONLY if your site is a clean installation of Joomla! CMS, and you have
not yet created any content, articles, modules, etc. This step will erase all the work you’ve
already done to your site. Otherwise, skip this step and read the next one - “Manually configuring
template modules”
To install dump.sql file you should do the following:
— Open your PHPMyAdmin tool (which is part of your Hosting Control Panel).
— Select the database you created for the Joomla! CMS.
— Select “IMPORT” option and click “Browse” button
— Navigate to the “sources” folder of the template and select dump.sql file.
— Press “GO”
button. When the import is done, you need to complete only one more step,
described in this document - “
6. Images Preview
”
If you don’t have much experience with PHPMyAdmin tool, please check this video tutorial on how
to complete the steps above:
Manually configuring template modules
a) In the Joomla! administration section please, go to “Extensions → Module Manager”;
b) Open each module from the list below, assign it to the given position and set the configuration
options from the following table:
Name Type Position
Class
Suffix
Show
Title
Order Additional info
Social Custom HTML header-right no 1
You can find the source
code in
sources/social.html file
Search Search header-right
no 1
Main menu Menu Top no 1
Call Center Custom HTML Top no 2
You can find the
source code in
sources/call_center.h
tml file
Slider Banners Slider
no 1
Top Blocks
Articles
–
Newsflash
top-blocks no 1
Outsourcing Custom HTML top-blocks
yes 2
You can find the
source code in
sources/outsourcing.
html file
What we do Custom HTML right yes 1
You can find the
source code in
sources/what_we_do
.html file
Newsletter
Acymailing
Starter
right
yes 2
Sources/
com_acymailing_star
ter_2.0.0_2011-08-
06.zip
Footer Footer footer no 1
c) After all the necessary changes are applied save the values of each module by clicking the “Save
& Close” button in the upper right corner.
To make your Joomla! look like our live demo please, install the database dump that is located in
sources\dump.sql.
ATTENTION:
By default Joomla! 1.6 installation comes with only the Main Menu module installed.
Therefore to make your Joomla! site look like our live demo please, install all necessary modules
through the administration section of your site. You may also need to add all the menu items and
articles by yourself.
5. Images preview
To display the same images as on the preview page of your template, please, go to the “sources”
folder from the template package and copy the “images” folder to the Joomla! installation root folder on your
server.
6. How to add new Custom HTML modules to your template
Some templates may contain the custom HTML modules that should be installed manually.
Note: If you performed the dump.sql file import you may skip this step.
Please, enter the Joomla! administration panel, and go to the Site → Global Configuration. Find the
option “Default Editor” and select “No Editor”. Default Joomla! WYSIWYG editors may modify the custom
HTML module’s code and cause you some issues.
— Then go to the Extensions → Module Manager page. Press the "New" button in the top right
corner:
— select "Custom HTML";
— in the "Title" field type the title of your new module;
— in the "Position" field choose the correct position for the new module from the Modules
Configuration table above (Step 4);
— in the "Module Class Suffix" field please, type the appropriate class suffix specified in the
Modules Configuration table;
— in the "Custom Output" text area please, put the HTML code provided in the appropriate file
from the template “sources” folder. The correct filename could be found in the Modules Configuration table.
After everything is done please, Press the "Save & Close" button in the top right corner, and check
your website.
7. How to install third party modules
The template may have some modules that are not included into the default Joomla! installation and
should be installed manually. The procedure is the same as described in the step 1 of this document.
— From the Joomla! administration panel select “Extensions → Extensions Manager”.
— Click the Browse button and select the extension package from the template “sources” folder.
— Click the “Upload & Install” button.
— Then go to the “Extensions → Module manager” page. The newly installed module should
appear in the modules list.
— Open the module and set the configuration options according to the Modules Configuration table.
Congratulations, you are done with the template installation. Now you may go to your site
and continue with
|
https://www.techylib.com/el/view/engineachoo/the_following_instructions_will_help_you_to_install_your_template
|
CC-MAIN-2017-51
|
refinedweb
| 1,146
| 57.27
|
#include <wx/event.h>
This event class contains information about navigation events, generated by navigation keys such as tab and page down.
This event is mainly used by wxWidgets implementations. A wxNavigationKeyEvent handler is automatically provided by wxWidgets when you enable keyboard navigation inside a window by inheriting it from wxNavigationEnabled<>.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
Event macros:
Flags which can be used with wxNavigationKeyEvent.
Returns the child that has the focus, or NULL.
Returns true if the navigation was in the forward direction.
Returns true if the navigation event was from a tab key.
This is required for proper navigation over radio buttons.
Returns true if the navigation event represents a window change (for example, from Ctrl-Page Down in a notebook).
Sets the current focus window member.
Sets the direction to forward if direction is true, or backward if false.
Sets the flags for this event.
The flags can be a combination of the wxNavigationKeyEvent::wxNavigationKeyEventFlags values.
Marks the navigation event as from a tab key.
Marks the event as a window change event.
|
https://docs.wxwidgets.org/3.1.3/classwx_navigation_key_event.html
|
CC-MAIN-2021-43
|
refinedweb
| 187
| 50.63
|
go to bug id or search bugs for
Description:
------------
Trying to create a RecursiveArrayIterator class,
using currently available SPL classes.
Its generating this error ->
php in free(): warning: chunk is already free
Apparently it works on win2k ->
Reproduce code:
---------------
<?php
$array = array(0 => array('world'));
class RecursiveArrayIterator extends ArrayIterator implements RecursiveIterator {
function hasChildren() {
return (is_array($this->current()));
}
function getChildren() {
return new self($this->current());
}
}
$it = new RecursiveIteratorIterator(new RecursiveArrayIterator($array));
foreach($it as $key => $val) {
var_dump($key, $val);
}
?>
Expected result:
----------------
$ php -f z.php
int(0)
string(5) "world"
Actual result:
--------------
$ php -f z.php
int(0)
string(5) "world"
php in free(): warning: chunk is already free
Add a Patch
Add a Pull Request
Please do the following:
php run-tests.php ext/spl
and copy the output here
if you have also this:
memcheck php ext/spl/tests/bug31926.php
but just report if it states any real errors besides the messages stating that some memory is still accessible at program end.
For me the problem is solved both in HEAD and 5.0.4-dev but i don't use FreeBSD.
Please try using this CVS snapshot:
For Windows:
I can't reproduce this. Are you sure it's just something wrong with your system? Can you reproduce it on other machines?
I can reproduce it on 2 separate machines running FreeBSD 5.3.
I also posted an email to FreeBSD's ports mailing list around the time when I filed this bug report, and someone wrote back saying that they can reproduce it on their machine running FreeBSD 5.3.
Unfortunately for me, spare time is becoming scarce, and I wouldn't be able to keep up with this bug report.
So if this is something that cannot be reproduced by php dev, then feel free to mark this report "closed" or "no feedback", or whatever is appropriate.
Maybe someone else who has time, that is running
FreeBSD 5.3 can pick this up.
Thank you.
Can not reproduce.
|
https://bugs.php.net/bug.php?id=31926
|
CC-MAIN-2017-51
|
refinedweb
| 333
| 56.25
|
Customizing the 2007 Office Fluent Ribbon for Developers (Part 3 of 3)
Summary: This article is the companion to the part one and part two articles of the same name. This article contains a list of frequently asked questions regarding the Microsoft Office Fluent user interface. (33 printed pages)
Frank Rice, Microsoft Corporation
Ken Getz, MCW Technologies, LLC
Published: May 2006
Updated: May 2008
Applies to: Microsoft Office Access 2007, Microsoft Office Excel 2007, Microsoft Office PowerPoint 2007, Microsoft Office Outlook 2007, Microsoft Office Word 2007, Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System, Microsoft Visual Studio 2005
Contents
The Ribbon and the Office Fluent User Interface
Ribbon and Office Fluent UI Frequently Asked Questions
Conclusion
Additional Resources
The Ribbon and the Office Fluent User Interface
The Ribbon is a part of the new Microsoft Office Fluent user interface (UI) for some of the applications in the 2007 release of Microsoft Office, including Microsoft Office Access 2007, Microsoft Office Excel 2007, Microsoft Office Outlook 2007, and Microsoft Office Word 2007. The Office Fluent UI is a departure from the system of menus, toolbars, and dialog boxes that were part of earlier releases of Office.
Ribbon and Office Fluent UI Frequently Asked Questions
These are some of the questions that are asked most frequently about how to customize the Fluent UI.
When I moved to Beta 2 TR, I noticed some differences in element names. What are these, specifically?
The <advanced> element was renamed to <dialogBoxLauncher>. The <fileMenu><menu id="fileMenu"> element was renamed to <officeMenu>. A loadImage callback was added with the following signatures.
Sub LoadImage(imageID, ByRef image)
public object loadImage(string imageId)
In addition, many of the idMso values changed.
How do you expose the internal IDs of built-in controls?
You can see each individual idMso value within the applications by using the following procedure.
To find the idMso value for an application
Click the Microsoft Office Button, and then click Application Options.
Click Customize, and then select the item you want information about.
Move the pointer over the item. The dialog box displays the control's idMso value in a ScreenTip, in parentheses.
What are the control IDs for controls that I need to disable or repurpose?
There are a set of Ribbon controls whose published control IDs are not the same as control IDs that should be used for their disabling/repurposing. This is due to how these specific controls are implemented internally. These previously unpublished control IDs only apply to the <commands> section of the XML used to define the Ribbon when repurposing and disabling commands. For all other scenarios (inserting controls with insertAfterMso/insertBeforeMso, cloning controls with <control idMso=""/>, re-using images with imageMso, and so forth), the published control ID is the correct ID.
The following table lists the affected IDs–the second column is the published control ID, and the third column is the ID that should be used to disable or repurpose the controls.
Table 1. Control IDs for disabling and repurposing controls
What are some of the limitations on attributes that I need to know about?
The getShowImage, getShowlabel, showImage, showLabel attributes are ignored on large controls.
The description and getDescription attributes only apply to menu items.
The getSize and size attributes do not apply to menu items. Instead the size is based on the item size for menu items.
The getVisible and visible attributes are ignored for ButtonGroup and Separator elements.
How do I display error messages for the Fluent UI?
You can control the error message display by setting a general option in each application.
To display error messages for the Fluent UI
Click the Microsoft Office Button, and then click Application Options to display the dialog box.
Click Advanced, and then find the General section of the options.
Select Show add-in user interface errors.
How do I change the UI dynamically? For example, I want to update labels and images, hide and show buttons, or refresh the content of a list while my code is running.
See the section "Dynamically Updating the Fluent UI" in the article Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3).
In Excel 2007, I am not able to run macros from the Quick Access Toolbar or from the Ribbon when my worksheet is in Print Preview. Is this the expected behavior?
Yes, this is the expected behavior. The ability to run macros while in Print Preview is disabled in Excel 2007. For example, assume that you have added a custom button to the Quick Access Toolbar. If you click the Microsoft Office Button, point to Print, and then click Print Preview, the current worksheet is displayed in Print Preview mode. The default buttons on the Quick Access Toolbar are disabled. If you then click the custom button, nothing happens; that is, any macro attached to the button is not executed and no dialog box is displayed.
Is there a way to programmatically control how the UI used by my add-in scales in size as controls are added or removed?
As currently implemented, custom groups do not resize themselves. They remain large, effectively getting a higher priority.
Is there a way to reset the UI and remove all customizations?
Yes. To reset the UI, uninstall your add-ins and then close any open documents. This restores the default UI.
Can I dynamically change the number of results in a Gallery control?
Yes. You can dynamically fill the gallery by supplying callbacks for the getItemCount, getItemLabel, or getItemImage attributes.
Are custom controls in the Fluent UI supported?
No. As an alternative, for scenarios that are not supported by the built-in controls, you can use the custom task pane feature to host your custom controls. You can find more information in the article Creating Custom Task Panes in the 2007 Office System.
Are all the controls in the Office applications available to my own customizations?
No, some controls are not available. For example, the splitButtonGallery control is not available to your customizations. (An example of the splitButtonGallery control is the Text Highlight Color control in Word 2007.)
What parts of the Fluent UI are not customizable by using the new extensibility model?
You cannot customize the status bar, the Mini toolbar, or context menus, although you can customize context menus by using the command bars object model.
Can I turn off the Mini toolbar?
Yes. The following procedure gives the steps.
To turn off the Mini toolbar
Click the Microsoft Office Button, and then click Application Options to display the Options dialog box.
Click Popular.
Clear the Show Mini Toolbar on selection option.
My Microsoft Office Access 2003 solution hides all Access menus and toolbars and displays custom menus and toolbars. What happens when users open this solution in Access 2007? Will my custom menus and toolbars appear on the Add-Ins tab?
Access 2007 can detect when an Access 2003 application includes settings to hide menus and toolbars, and to display only custom menus and toolbars. In this case, Access 2007 does not display the custom menus and toolbars on the Add-Ins tab.
How does attached Fluent UI customization XML work in Access 2007? Can I store the custom UI in the database? If so, how?
Because Access databases do not implement the new Office Open XML Formats file structure, Microsoft Visual Basic for Applications (VBA) solutions in Access usually store their markup in a table in the database. Create a table named USysRibbons and store two columns (RibbonName, a 255-character field, and RibbonXml, a memo field) that contain names and markup. You can then select a Ribbon by name from the table, by using the Options dialog box. You can also use standard data manipulation techniques to read XML content from a table, and call the Application.LoadCustomUI method to apply the new Ribbon content. You can find more information on the Office Fluent User Interface Developer Portal Web site.
What happens when two add-ins try to repurpose the same built-in control?
The last add-in that attempts to repurpose the control becomes the active add-in.
Can I programmatically remove items from built-in galleries?
You cannot programmatically remove items from built-in galleries by using extensibility. You may be able to remove them by using the application's object model.
Can I programmatically customize the Quick Access Toolbar, at least in a start-from-scratch scenario?
Yes. You can customize the Quick Access Toolbar by setting the startFromScratch attribute of the Ribbon element to true. However, we recommend not customizing the Quick Access Toolbar unless there is a good business reason—this feature is really meant for user customization.
How do I localize my UI?
You have two options. If you use COM, you can return different XML files, based on the current UI language. If you use VBA, you can have multiple VBA files for each language, or you can have a callback that returns the appropriate label for all of your controls.
Can I remove the Microsoft Office Button?
You can disable or hide all of the items on the Microsoft Office Button menu, but you cannot remove the button itself.
How do I write a VBA add-in that uses the Fluent UI, but that uses command bars in Office 2003 applications?
You can create one VBA document that uses the functionality of both Office 2003 and the 2007 Microsoft Office system. One way to do this is to check the version of Office, by using the Application.Version property. If the value is less than "12" (for 2007 Office applications), run your command bars code. Your Fluent UI XML markup is ignored by the converter that enables a document created in a 2007 Office application to be opened in an Office 2003 application. If the value is "12", you do not need to do any special processing. The file that contains your Fluent UI XML markup is loaded from the Office Open XML Formats file, and your callbacks are made available.
I cannot use extensibility to control the status bar. How do I programmatically hide the status bar?
You can hide the status bar by using the following line of code.
Application.CommandBars("Status Bar").Visible = False
How do I create two add-ins that add items to the same group or tab?
The idQ property of controls exists to enable multiple add-ins to share containers, such as custom tabs and groups.
In the following VBA example, two Excel add-ins share the same "Contoso" group on the add-ins tab; each adds one button to it. The key is specifying the same unique namespace in the <customUI> tag. Then, controls can reference this namespace by using idQ.
CustomUI for add-in 1
<customUI xmlns="" xmlns: <ribbon> <tabs> <tab idMso="TabAddIns"> <group idQ="x:Contoso" label="Contoso"> <button id="C1" label="Contoso Button 1" size="large" imageMso="FileSave" onAction="c_action1" /> </group> </tab> </tabs> </ribbon> </customUI>
CustomUI for add-in 2
<customUI xmlns="" xmlns: <ribbon> <tabs> <tab idMso="TabAddIns"> <group idQ="x:Contoso" label="Contoso"> <button id="C2" label="Contoso Button 2" size="large" imageMso="FileSave" onAction="c_action2" /> </group> </tab> </tabs> </ribbon> </customUI>
If you use a COM add-in to customize the Fluent UI, the namespace name must be the ProgID of the COM add-in, but the behavior is otherwise the same. When you use a shared add-in, the ProgID is AddInName.Connect. When you use Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System (Visual Studio 2005 Tools for Office Second Edition) to create the add-in, the ProgID is the name of the add-in.
How can I assign KeyTips to my controls?
KeyTips are the keyboard shortcuts that appear on the Ribbon when you press the ALT key. You can assign your own KeyTips by using the keytip and getKeytip attributes. (The getKeytip attribute supplies the name of a callback procedure that provides the KeyTip.)
(VBA only) If two documents have the same callback signatures, the callbacks from the active document are called. How do I ensure my UI calls only the callbacks associated with my document?
This is an issue that was also present in Office 2003. As a workaround, you can make your callback names unique by adding your add-in or solution name to the callback name. You can also put your callbacks in a module, and then refer to your callbacks by using the full name of the procedure. For example, if you put your callbacks in a module named "MyAddInXYZ", you can refer to the callbacks by using "MyAddInXYZ.myCallback".
Can I interact with Fluent UI controls from VBA?
The Application.CommandBars class provides the following methods for interacting with Fluent UI controls.
Table 2. Methods for the Application.CommandBars class
How can I determine the Ribbon ID for Ribbons in the various applications?
The following table lists the Ribbon IDs for the different applications. Each application passes this ID to your solution in the getCustomUI method of the IRibbonExtensibility interface. This enables your application (or add-in) to determine which application has loaded your code, and you can return a different set of XML content depending on the identity of the host application.
Table 3. Ribbon IDs by application
Can I add images to my Enhanced ScreenTips?
No. You can add only text, by using the Supertip property.
How do I start a new line in an Enhanced ScreenTip?
Type the following character code where you want the new line to start:
How do I invalidate a control that has a qualified ID (idQ)?
You can call a callback procedure and pass the ID of the control in the following way.
Assume that idQ="x:test_idq"
You invoke the callback by using the following method.
InvalidateControl("test_idq")
You cannot set callbacks or invalidate controls from a different add-in (even though they are specified by using the idQ attribute in the current add-in's XML). Only the add-in that has the ProgID namespace gets callbacks and can invalidate the control.
How do I write a shim for my COM add-in?
See the information in the MSDN article Isolating Office Extensions with the COM Shim Wizard.
How do I display large menu items?
In the <menu> tag in the Fluent UI XML file, set itemSize="large". For any element that supports the itemSize attribute, set the value to large to cause the item to appear large (set the value to normal for normal-sized items).
Can I have two callbacks with the same name but different signatures?
Although you can do this, we recommended that you have different callbacks for each control (and not count on built-in overloading to handle the distinction between the two callbacks). For example, assume that you write a Fluent UI add-in with two callbacks of the same name, as in the following code.
public void doSomething(IRibbonControl control, bool pressState); public void doSomething(IRibbonControl control);
Also assume that your XML markup defines a toggleButton control and a button control, and that each of them has an onAction="doSomething" callback.
In this instance, only the toggleButton control will work, because of the Visual Basic and Visual C# auto-generated IDispatch implementation. If you write a C++ add-in and implement IDispatch yourself, this case will work. (In other words, it is best not to do this.)
How can I determine the correct signatures for each callback procedure?
The following table lists all of the callbacks, along with their procedure signatures for C++, VBA, C#, and Visual Basic.
Table 4. List of all C#, VBA, C++, and Visual Basic callbacks and signatures
How do I find out what each Ribbon attribute indicates?
The following table lists all of the Ribbon attributes and includes a short description of each.
Table 5. Ribbon attributes
I am looking for guidance about how to create a consistent end-user experience when customizing the Fluent UI directly with XML files or through add-ins. Can you help?
You can find the 2007 Office system guidance document UI Style Guide for Solutions and Add-Ins on the Microsoft Download Center.
Is it possible to line up (either right-justify or left-justify) text boxes in my custom Fluent UI?
No. However, you might be able to get a similar effect by using the box control. The box control is a container for other controls that has a boxStyle attribute that can be set to horizontal or vertical.
I have a document that I created from a template containing several macros. I have tried calling the macros from the Ribbon onAction callbacks without success. How can I call existing macros from Ribbon controls without modifying the original macros?
It is not possible to call macros that were created for an earlier version of Office directly from a Ribbon control without modifying the macros to include a reference to the control. However, there is a workaround. You can create a new module that contains a macro that hosts all of the Ribbon callbacks. When a call is made to the new macro from a Ribbon control, the older macro is called. The following code shows an example.
New Ribbon extensibility module
Sub RibbonX_macroname(control as IRibbonControl) Select Case control button1 macroname1 button2 macroname2 End Select End Sub
How do I get the selected index or item ID for a combo box control?
The onChange callback returns the selected string. The following code shows the signature.
Sub OnChange(control as IRibbonControl, text as String)
Whenever the value of the combo box is selected, the onChange callback receives the text. However, it is not possible to get the index of the selection.
Is it possible to predict or control the order in which callbacks are called?
No. You should not add logic to your Fluent UI solutions that depends on callbacks being called in a certain order.
In an application that uses command-bar controls, the Tag property was useful for storing arbitrary strings. How can I use the IRibbonControl.Tag property in my Fluent UI solutions?
The 2007 Microsoft Office applications do not use the Tag property, so you can use it to store arbitrary strings and then retrieve them at run time. In your XML, you can set the tag as in the following code.
<button id="mybutton" tag="some string" onAction="MyFunction"/>
When MyFunction is called, you can get the IRibbonControl.Tag property, which will be "some string".
Normally, you can distinguish between your controls by using the IRibbonControl.Id property, but there are restrictions on what IDs can contain (no non-alphanumeric characters, and they must all be unique). The Tag property does not have these restrictions, so it can be used in the following situations, where the Id property does not work:
If you need to store a special string with your control, such as a file name, as in this example: tag="C:\path\to\my\file.xlsm"
If you want multiple controls to be treated the same way by your callbacks, but you do not want to have a list of all of their IDs (which have to be unique). For example, you could have buttons on different tabs all with tag="blue", and then just check the Tag property instead of the ID for some action in the callback.
Is it possible to display an image in a ScreenTip or Enhanced ScreenTip similar to the Chart button in the Illustrations group on the Insert tab?
No. This is not currently supported in Fluent UI extensibility.
Assume I have a custom Ribbon defined for Outlook 2007 and a different Ribbon defined for Word 2007. If I use Word for my e-mail editor, which Ribbon will I see when I create or edit an e-mail message?
When a new Inspector type is created, Outlook will call the GetCustomUI method and pass in the Ribbon ID as an argument. Even though Outlook uses Word APIs, it is still an Outlook container and uses the Outlook Ribbon.
Conclusion
The articles that make up this set provide you with the information that you need to produce professional-looking solutions that are tailored to the needs of your customers. The customization samples presented in Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3) can be used as a jumping-off point for creating a UI that places the controls and options that are most important to your customers within easy reach. The reference information described in Customizing the 2007 Office Fluent Ribbon for Developers (Part 2 of 3) gives you detailed control over the look and feel of the Fluent UI. This article answers many of the questions that might arise as you create your own customized Fluent UI. By applying the information presented in these articles to your own applications, you can create more innovative, attractive solutions that set you apart from your competition.
|
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/aa722523(v=office.12)
|
CC-MAIN-2019-13
|
refinedweb
| 3,501
| 54.02
|
"Serge E. Hallyn" <serue@us.ibm.com> writes:> Quoting Eric W. Biederman (ebiederm@xmission.com):>> This whole debate on network devices show up in multiple network namespaces>> is just silly. The only reason for wanting that appears to be better> management.>> A damned good reason. Better management is a good reason. But constructing the management in a way that hampers the implementation and confuses existing applications isa problem.Things are much easier if namespaces are completely independent.Among other things the semantics are clear and obvious.> Clearly we want the parent namespace to be able> to control what the child can do. So whatever interface a child gets,> the parent should be able to somehow address. Simple iptables rules> controlling traffic between it's own netdevice and the one it hands it's> children seem a good option.That or we setup the child and then drop CAP_NET_ADMIN.>> We have deeper issues like can we do a reasonable implementation without a>> network device showing up in multiple namespaces.>> Isn't that the same issue?I guess I was thinking from the performance and cleanliness point ofview.>> If we can get layer 2 level isolation working without measurable overhead>> with one namespace per device it may be worth revisiting things. Until>> then it is a side issue at best.>> Ok, and in the meantime we can all use the network part of the bsdjail> lsm? :)If necessary. But mostly we concentrate on the fundamentals and figureout what it takes to take the level 2 stuff working.Eric-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
|
https://lkml.org/lkml/2006/6/30/168
|
CC-MAIN-2017-43
|
refinedweb
| 284
| 58.58
|
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.07-1
- unstable 5.08-1
NAME¶pthread_kill - send a signal to a thread
SYNOPSIS¶
#include <signal.h>
int pthread_kill(pthread_t thread, int sig);
Compile and link with -pthread.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
pthread_kill():
DESCRIPTION¶The pthread_kill() function sends the signal sig to thread, a thread in the same process as the caller. The signal is asynchronously directed to thread.
If sig is 0, then no signal is sent, but error checking is still performed.
RETURN VALUE¶On success, pthread_kill() returns 0; on error, it returns an error number, and no signal is sent.
ERRORS¶
- EINVAL
- An invalid signal was specified.
ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶POSIX.1-2001, POSIX.1-2008.
NOTES¶Signal.
|
https://manpages.debian.org/unstable/manpages-dev/pthread_kill.3.en.html
|
CC-MAIN-2020-40
|
refinedweb
| 146
| 69.68
|
Now that you have seen some examples of Java programming, this section provides an overview of Java. Previous examples show that Java can be used to create both standalone applications and applets that can be embedded in an HTML document. As you learn the features of Java, you'll notice that Java has everything you need from a general-purpose programming language. Although Java has been associated mostly with the creation of applets to be embedded in Web pages, many people have come to realize that Java is also ideal for developing general-purpose applications. Nowadays, Java is also extensively used for server-side programs such as Java servlets that work with Web servers.
One of the reasons for Java's popularity is the portability of Java code. The compiled Java code runs on any system for which a Java interpreter and the Java libraries are available. It's not trivial to build the Java libraries, but once the Java interpreter and libraries are available on an operating system, you can run most Java applications unchanged on that system.
Java is an object-oriented language. So much so that Java does not allow any standalone procedures at all; all data and procedures must be inside an object. In fact, the entire Java application is an object as well.
The basic concepts of object-based programming are the same as those of other object-oriented programming languages such as Smalltalk or C++. There are three underlying concepts:
Data abstraction
Inheritance
Polymorphism
To understand data abstraction, consider the file input/output (I/O) routines in the C run-time library. These routines allow you to view the file as a stream of bytes and allow you to perform various operations on this stream by calling the file I/O routines. For example, you can call fopen to open a file, fclose to close it, fgetc to read a character from it, and fputc to write a character to it. This abstract model of a file is implemented by defining a data type named FILE to hold all relevant information about a file. The C constructs struct and typedef are used to define FILE. You can think of this definition of FILE, together with the functions that operate on it, as a new data type just like C's int or char.
To use the FILE data type in C, you do not have to know the data structure that defines it. In fact, the underlying data structure of FILE can vary from one system to another. Yet, the C file I/O routines work in the same manner on all systems. This is possible because you never access the members of the FILE data structure directly. Instead you rely on functions and macros that essentially hide the inner details of FILE. This is known as data hiding.
Data abstraction is is used to ensure that the the internal structure of the ADT can be altered without any fear of breaking the programs that call the functions that are provided for operations on that ADT. Thus, C's FILE data type is an example of an ADT (see Figure 26-2).
In object-oriented languages, you create an object from an ADT. Essentially, an ADT is a collection of variables together with the procedures necessary to operate on those variables. The variables represent the information contained in the object while the procedures define the operations that can be performed on that object. You can think of the ADT as a template from which specific instances of objects can be created as needed. The term class is often used for this template, consequently class is synonymous with an ADT. In fact, Java provides the class keyword precisely for the purpose of defining an ADT-the template from which objects are created. The ADT is a template for objects in the sense that creating an object involves setting aside a block of memory for the variables of that object.
In Java, the procedures that operate on an object are known as methods. This term comes from the object-oriented language Smalltalk. The methods define the behavior of an object. Another common term of object-oriented programming also originated in Smalltalk-the idea of sending messages to an object causing it to perform an operation by invoking one of the methods. In Java, you do this is by calling the appropriate method of the object.
In Java, you use the class keyword to define the blueprint of an object. That means a class defines the data and methods that constitute a type of object. Then, you can create instances of the object by using the new keyword. For example, you might create an instance of a Circle class as follows:
Circle c1 = new Circle(100.0, 60.0, 50.0);
This example invokes the Circle class constructor that accepts three floating-point numbers as argument (presumably, the coordinates of the center and the radius of the circle). You'll see a complete example of classes in the 'Geometric Shapes in Java' section, later in this chapter.
Data abstraction does not cover an important characteristic of objects. Real-world objects do not exist in isolation. Each object is related to one or more other objects. In fact, you can often describe a new kind of object by pointing out how the new object's characteristics and behavior differ from those of a class of objects that already exists. This is what you do when you describe an object with a sentence such as: B is just like A, except that B has ... and B does ... . Here you are defining objects of type B in terms of those of type A.
This notion of defining a new object in terms of an old one is an integral part of object-oriented programming. The term inheritance is used for this concept because you can think of one class of objects as inheriting the data and behavior from another class. Inheritance imposes a hierarchical relationship among classes where a child class inherits from its parent. In Java terminology, the parent class is known as the superclass; the child is the subclass. In a Java program, you use the extends keyword to indicate that one class is a subclass of another.
In a literal sense, polymorphism means the quality of having more than one form. In the context of object-oriented programming, polymorphism refers to the fact that a single operation can have different behavior in different objects. In other words, different objects react differently to the same message. For example, consider the operation of addition. For two numbers, addition should generate the sum. In a programming language that supports object-oriented programming, you should be able to express the operation of addition by an operator, say, +. When this is possible, you can use the expression x+y to denote the sum of x and y, for many different types of x and y: integers, floating-point numbers, and, even strings (for strings the + operation means the concatenation of two strings).
Similarly, suppose a number of geometrical shapes all support a method, draw. Each object reacts to this method by displaying its shape on a display screen. Obviously, the actual mechanism for displaying the object differs from one shape to another, but all shapes perform this task in response to the same method.
Polymorphism helps by allowing you to simplify the syntax of performing the same operation on a collection of objects. For example, by exploiting polymorphism, you can compute the area of each geometrical shape in an array of shapes with a simple loop like this:
// Assume "shapes" is an array of shapes (rectangles, circles, // etc.) and "computeArea" is a function that computes the // area of a shape. for(int i=0; i < shapes.length; i++) { double area = shapes[i].computeArea(); System.out.println("Area = "+area); }
This is possible because regardless of the exact geometrical shape, each object supports the computeArea method and computes the area in a way appropriate for that shape.
For a concrete illustration of object-oriented programming in Java, consider the following example. Suppose that you want to write a computer program that handles geometric shapes such as rectangles and circles. The program should be able to draw any shape and compute its area.
The first step in implementing the geometric shapes in Java is to define the classes. I'll start with an abstract class that simply defines the behavior of the shape classes by defining the common methods. The following listing shows the abstract Shape class, which you should save in the Shape.java file:
//--------------------------------------------------------------- // File: Shape.java // // Abstract class for Shape objects abstract public class Shape { abstract public void draw(); abstract public double computeArea(); }
As this listing shows, you use the abstract keyword to indicate that a class cannot be instantiated because it does not implement all the methods.
After you define the abstract Shape class, you can create concrete versions of Shape classes such as a Circle and Rectangle. Here is a simple implementation of the Circle class that you should save in a file named Circle.java:
//--------------------------------------------------------------- // File: Circle.java // // Circle class public class Circle extends Shape { double x, y; double radius; public Circle(double x, double y, double r) { this.x = x; this.y = y; this.radius = r; } public double computeArea() { return Math.PI * radius * radius; } public void draw() { System.out.println("Circle of radius "+radius+ " at ("+x+", "+y+")"); } }
As this definition of the Circle class shows, this shape class implements the computeArea and draw methods that were declared as abstract in the Shape class. The Circle class also defines three double variables-x, y, and radius-that denote the coordinates of the center and the radius of the circle. Such variables are referred to as instance variables of the class.
The extends keyword in the Circle class definition indicates that Circle is a subclass of the Shape class. The public qualifier specifies that the Circle class is accessible from other packages (in Java, a package is a collection of classes).
Following the pattern of the Circle class, I defined the Rectangle class as shown in the following listing:
//--------------------------------------------------------------- // File: Rectangle.java // // Rectangle class public class Rectangle extends Shape { double x1, y1; double x2, y2; public Rectangle(double x1, double y1, double x2, double y2) { this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2; } public double computeArea() { return Math.abs((x1-x2)*(y1-y2)); } public void draw() { System.out.println("Rectangle with corners "+ "("+x1+", "+y1+") and "+ "("+x2+", "+y2+")"); } }
Just as the Circle class is saved in the Circle.java file, the Rectangle class should be saved in the Rectangle.java file-at least that's how Sun's Java 2 SDK expects the class definitions.
Now that I have defined the Circle and Rectangle classes, it's time to test them with a simple program. In Java, the program itself must be another class, and for a standalone Java program, the class must include a public static void method called main (this is akin to the main function in a C program).
The following listing shows the MakeShape class (in a file named MakeShape.java) that includes a main method to test the geometric shapes:
//--------------------------------------------------------------- // File: MakeShape.java // // Java application to try out various shapes public class MakeShape { public static void main(String args[]) { Shape shapes[] = new Shape[2]; shapes[0] = new Circle(100.0, 100.0, 50.0); shapes[1] = new Rectangle(80., 40., 120., 60.); System.out.println("\n"+shapes.length+" shapes\n"); for(int i=0; i < shapes.length; i++) { double area = shapes[i].computeArea(); System.out.println("Shape #"+i); shapes[i].draw(); System.out.println("Area = "+area); System.out.println("-----------"); } } }
The main method creates an array of Shape objects and initializes the array with different shapes. Then, the computeArea and draw methods of each shape are invoked.
In this example, the MakeShape class is the Java application. To compile the program, type the following command:
javac MakeShape.java
This step takes care of compiling the MakeShape.java file and all the related classes (the Shape.java, Circle.java, and Rectangle.java files). In other words, the Java compiler (javac) acts a bit like the make utility in UNIX-javac determines the dependencies among classes and compiles all necessary classes.
To run the MakeShape program, use the Java interpreter, as follows:
java MakeShape
The following listing shows the result of running the MakeShape program in a terminal window in Red Hat Linux:
2 shapes Shape #0 Circle of radius 50.0 at (100.0, 100.0) Area = 7853.981633974483 ----------- Shape #1 Rectangle with corners (80.0, 40.0) and (120.0, 60.0) Area = 800.0 -----------
A Java program consists of one or more classes. For standalone applications, you must have a class with a public void static main method. Applets require only a subclass of the Applet class.
The Java 2 SDK expects the source files to have the same name as the class, but with a .java extension (thus, a class named Circle is defined in the file Circle.java). However, other Java development environments may have a different convention for naming files.
Within each source file, the parts of the program are laid out in the following manner:
The file starts with some comments that describe the purpose of the class and provide other pertinent information such as the name of author and revision dates. Java supports both C and C++-style comments. As in ANSI C, comments may start with a /* and end after a */ pair. Or, you may simply begin a line of comment with a pair of slashes (//). A special type of comment, known as doc comment, begins with a /** and ends with */. The doc comments can be extracted by the javadoc utility program to create online documentation for classes.
One or more import statements that allow you to use abbreviated class names. For example, if you use the java.applet.Applet class, you can refer to that class by the short name Applet provided you include the following import statement in the source file:
import java.applet.*;
Note that the import statement does not really bring in any extra code into a class; it's simply a way to save typing so you can refer to a class by a shorter name (for example, Applet instead of java.applet.Applet).
The class definition that includes instance variables and methods. All variables and methods must be inside the class definition.
Java supports the standard C data types of char, short, int, long, float, and double. Additionally, Java introduces the byte and boolean types. Unlike C, however, Java also specifies the exact size of all primitive data types (in C, the size of the int type varies from one system to another). Table 26-2 summarizes Java's primitive types.
Although most of the primitive types should be familiar to C programmers, the following C-style usage are not allowed in Java:
There is no unsigned keyword in Java; all integer values are signed.
You cannot write short int or long int (these combinations are often used in C programs).
The nonprimitive data types in Java are objects and arrays. All nonprimitive types are handled by reference-that means when you pass a nonprimitive type to a method, the address of the object is passed to the method. The primitive types, however, are passed by value-that means a copy of the data is passed to the method.
Java includes a built-in String type, and the compiler treats a String almost like a primitive type. Java's String class is a part of the java.lang package, which the Java compiler uses by default.
The Java compiler automatically converts text within double quotes ("...") into a String. Also, the + operator concatenates String objects. For example, you can write code such as the following:
String welcome = "Welcome to " + "Java."; int numchars = welcome.length(); System.out.println("There are "+numchars+ " characters in: "+welcome);
When executed, this code prints the following:
There are 16 characters in: Welcome to Java.
The String class supports a number of other methods that provide the functionality of the C's string manipulation functions (that are defined in the C header file <string.h>). Java includes two related classes for handling text strings:
String to hold a string of characters that cannot be individually manipulated (in other words, you cannot insert or replace characters in the string).
StringBuffer to represent a string of characters that may be manipulated as necessary. You can append, insert, and replace characters in a StringBuffer object.
Table 26-3 summarizes the methods of the String class.
An array is an ordered collection of one or more elements. Java supports arrays of all primitive as well as nonprimitive data types. As in C, you can declare an array variable by appending square brackets to the variable name, as follows:
byte buffer[] = new byte[256]; // Create an array for 256 byte variables. Shape shapes[] = new Shape[10]; // Create an array to hold 10 Shape objects.
Exceptions refer to unusual conditions in a program. They can be outright errors that cause the program to fail or conditions that can lead to errors. Typically, you can always detect when certain types of errors are about to occur. For instance, when indexing an array with the [] operator, you could detect if the index is beyond the range of valid values. Although you could check for such an error, it's tedious to check for errors whenever you index into an array. Java's exception-handling mechanism allows you to place the exception-handling code in one place and avoid having to check for errors all over your program.
The following code fragment illustrates the syntax of try, catch, and finally blocks in Java:
try { // Code that may generate exceptions of type AnException and // AnotherException } catch(AnException e1) { // Handle exception of type AnException } catch(AnotherException e2) { // Handle exception of type AnotherException } finally { // This block of code is always executed after exiting the try block // regardless of how the try block exits. }
Java does not support multiple inheritance, so each class can have only one superclass. However, sometimes you may want to inherit other types of behavior. For example, you may want to support a specific set of methods in a class (this is akin to saying that your class can do x, y, and z). Java provides the interface keyword that allows a class to implement a specific set of capabilities.
An interface looks like an abstract class declaration. For example, the run method is used by the Thread class (see the 'Threads' section) to start a thread of execution in a Java program. The Runnable interface specifies this capability through the following interface declaration:
public interface Runnable { public abstract void run(); }
If you implement all the methods of an interface, you can declare this fact by using the implements keyword in the class definition. For example, if a class defines the run method (which constitutes the Runnable interface), the class definition specifies this fact as follows:
import java.applet.*; public class MyApplet extends Applet implements Runnable { // Must include the run() method public void run() { // This can be the body of a thread. } // Other methods of the class // ... }
Java includes built-in support for threads. A thread refers to a single flow of control-a single sequence of Java byte codes being executed by the Java Virtual Machine. Just as an operating system can execute multiple processes (by allocating short intervals of processor time to each process in turn), the Java Virtual Machine can execute multiple threads within a single application. You can think of threads as 'multitasking within a single process.'
The best way to understand threads is to go through a simple example. The following listing shows the DigitalClock applet that uses a thread to display the current date and time:
//--------------------------------------------------------------- // File: DigitalClock.java // // Displays a digital clock using a thread. import java.util.*; import java.awt.*; import java.applet.*; public class DigitalClock extends Applet implements Runnable { private volatile Thread clockThread; int interval = 1000; // The run method is required by the Runnable interface. public void run() { Thread curThread = Thread.currentThread(); while(curThread == clockThread) { try { curThread.sleep(interval); } catch(InterruptedException e) {} // Repaint the clock. repaint(); } } public void start() { // Create and start the clock thread. clockThread = new Thread(this); clockThread.start(); } public void stop() { // Get rid of the clock thread. clockThread = null; } // The paint method displays the date and time. public void paint(Graphics gc) { Font helv = new Font("Helvetica", Font.ITALIC, 16); gc.setFont(helv); gc.drawString(new Date().toString(), 4, 20); } }
Save this listing in the DigitalClock.java file and compile the applet with the following command:
javac DigitalClock.java
Next, create an HTML document, DigitalClock.html, and embed the applet as follows:
<html> <body> <applet width=200 height=24 code=DigitalClock> A Java clock applet. </applet> </body> </html>
Now, type the following command to run the applet using the appletviewer:
appletviewer DigitalClock.html
Figure 26-3 shows the resulting appletviewer window with the output from the Clock applet.
As the applet runs, it should update the clock display every second.
After you have seen the DigitalClock applet in action, you should go through its listing to see how a thread works. When going through the DigitalClock applet's code, you should note the following:
To create a thread, you need an instance of a class that implements the Runnable interface. Typically, you would implement the Runnable interface in your applet by providing a run method. Then, you can create the Thread object by calling its constructor as follows:
myThread = Thread(this);
The Java Virtual Machine (JVM) calls the start method of the applet when the applet should run and the JVM calls the stop method when the applet should stop running. Thus, the applet's start method is a good place to create the Thread object.
After you create the Thread object, you must call that Thread object's start method (note that this is different from the start method of the applet) to start running the thread.
When started, the Thread executes the body of the run method. Thus, the run method should include code to perform whatever task you want the thread to perform.
When you want to stop the thread, set the Thread's reference to null, as follows:
myThread = null;
The Java garbage collection mechanism would clean up the Thread at an appropriate time.
|
http://etutorials.org/Linux+systems/red+hat+linux+9+professional+secrets/Part+V+Programming+Red+Hat+Linux/Chapter+26+Java+Programming/Learning+Java/
|
CC-MAIN-2016-44
|
refinedweb
| 3,753
| 55.24
|
, 24 Oct 2005, Luke Hope wrote:
> Hi,
>
> Recall perhaps that I wrote from scratch a clos java wrapper generator
> independant from jfli-abcl (of which I was unaware of at the time).
> Out of stubbornness or perhaps that since I wrote it with my
> preferences in mind, I still use it. I think it's a viable
> alternative.
Could you put it somewhere on the web so people can be directed to it,
not just to jfli-abcl?
>
> Anyway, the java interface works well to me, but I had a few
> difficulties/kludges I had to introduce. I'm sure some of this at
> least (all?) was due to my ignorance of the underlying implementation
> of your java interface.
>
> I found it really difficult to pass booleans to jstatic/jcall. I
> couldn't create/pass booleans! I especially had trouble with false:
> "nil" is treated as "null" and so it's an error to pass it as a
> primitive boolean. I ended up using the following code to coerce
> booleans:
> (if boolean (jfield-raw "java.lang.Boolean" "TRUE")
> (jfield-raw "java.lang.Boolean" "FALSE")))
> Nothing else seemed to work.
There's also (make-immediate-object nil :boolean). But both return a
Boolean, not a boolean, I think.
>
> After trying to come up with a piecemeal way of converting other
> primitives, I ended up just promoting each primitive to its grown up
> "java.lang" version before passing them to your methods. Actually,
> it was a bit of a surprise, I didn't think that passing a
> "java.lang.Float" would work when the method required just a "float"
> but it did.
Is this abcl-specific?
// P.java
public class P
{
static void pr (float f) {
System.out.println(f);
}
public static void main(String[] args)
{
pr (new Float(0.1f));
}
}
compiles and runs fine. Which surprises me, too, because the JLS 5.1.7 says
"There is no permitted conversion from any reference type to any primitive type."
But then my non-existent Java expertise is quite rusty.
|
https://sourceforge.net/p/armedbear-j/mailman/message/6262279/
|
CC-MAIN-2016-36
|
refinedweb
| 337
| 68.06
|
Details
Description
Harmony: classlib + VM revision 421248
If we try to find a non-comparable item in an empty array, RI returns -1 while Harmony throws ClassCastException.
Code to reproduce:
import java.util.*;
public class Test {
public static void main (String[] args) {
System.err.println(Arrays.binarySearch(new Object[] {}, new Object()) == -1 ? "PASSED" : "FAILED");
}
}
-------
Please see the patch with unit test for this case and fix for classlib code attached
Activity
verified on build:
svn = r447316, (Sep 18 2006), Windows/ia32/msvc 1310, debug build
Spark, patch applied at revision r447282, thanks a lot for this enhancement.
Anton, please verify that the problem is fully fixed as you expected.
I've applied this patch successfully to r443308 - it solved the problem.
This is the new patch. Would you please have a try?
Best regards
I suggest to change the component attribute to classlib, since RI's behavior makes sense here.
Refer to thread '[OLD JIRA]Deal with 780 866 1367 according to discussion'
patch
Verified by Anton and Spark.
|
https://issues.apache.org/jira/browse/HARMONY-866
|
CC-MAIN-2015-27
|
refinedweb
| 169
| 65.42
|
XAML overview
[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation]
We introduce the XAML language and XAML concepts to the Windows Runtime app developer audience, and describe the different ways to declare objects and set attributes in XAML as it is used for creating a Windows Runtime app.
What is associate a separate code-behind file for each XAML file that can respond to events and manipulate the objects that you originally declare in XAML.
The XAML language supports interchange of sources between different tools and roles in the development process, such as exchanging XAML sources between design tools and an IDE, or between primary developers and localization developers. By using XAML as the interchange format, designer roles and developer roles can be kept separate or brought together, and designers and developers can iterate during the production of an app.
When you see them as part of your Windows Runtime app projects, XAML files are XML files with the .xaml file name extension.
Basic XAML syntax
XAML has a basic syntax that builds on XML. By definition, valid XAML must also be valid XML. But XAML also has syntax concepts that are assigned a different and more complete meaning, while still being valid in XML per the XML 1.0 specification. For example, XAML supports property element syntax, where property values can be set within elements rather than as string values in attributes or as content. To regular XML, a XAML property element is an element with a dot in its name, so it's valid to plain XML but doesn't have the same meaning.
XAML and Microsoft Visual Studio
Microsoft Visual Studio helps you to produce valid XAML syntax, both in the XAML text editor and in the more graphically oriented XAML design surface. So when you write XAML for your app using Visual Studio, don't worry too much about the syntax with each keystroke. The IDE encourages valid XAML syntax by providing autocompletion hints, showing suggestions in Microsoft IntelliSense lists and dropdowns, showing UI element libraries in the toolbox, or other techniques. If this is your first experience with XAML, it might still be useful to know the syntax rules and particularly the terminology that is sometimes used to describe the restrictions or choices when we describe XAML syntax in reference or other topics. We cover these fine points of XAML syntax in a separate topic, Basic XAML syntax guide.
XAML namespaces
In general programming, a namespace is an organizing concept that determines how identifiers for programming entities are interpreted. By using namespaces, a programming framework can separate user-declared identifiers from framework-declared identifiers, disambiguate identifiers through namespace qualifications, enforce rules for scoping names, and so on. XAML has its own XAML namespace concept that serves this purpose for the XAML language. Here's how XAML applies and extends the XML language namespace concepts:
- XAML uses the reserved XML attribute xmlns for namespace declarations. The value of the attribute is typically a Uniform Resource Identifier (URI), which is a convention inherited from XML.
- XAML uses prefixes in declarations to declare non-default namespaces, and prefix usages in elements and attributes reference that namespace.
- XAML has a concept of a default namespace, which is the namespace used when no prefix exists in a usage or declaration. The default namespace can be defined differently for each XAML programming framework.
- Namespace definitions inherit in a XAML file or construct, from parent element to child element. For example if you define a namespace in the root element of a XAML file, all elements within that file inherit that namespace definition. If an element further into the page redefines the namespace, that element's descendants inherit the new definition.
- Attributes of an element inherit the element's namespaces. It's fairly uncommon to see prefixes on XAML attributes.
A XAML file almost always declares a default XAML namespace in its root element. The default XAML namespace defines which elements you can declare without qualifying them by a prefix. For typical Windows Runtime app projects, this default namespace contains all the built-in XAML vocabulary for the Windows Runtime that's used for UI definitions: the default controls, text elements, XAML graphics and animations, databinding and styling support types, and so on. Most of the XAML you'll write for Windows Runtime apps will thus be able to avoid using XAML namespaces and prefixes when referring to common UI elements.
Here's a snippet showing a template-created Page root of the initial page for an app (showing the opening tag only, and simplified). It declares the default namespace and also the x namespace (which we'll explain next).
<Page x:
The XAML-language XAML namespace
One particular XAML namespace that is declared in nearly every Windows Runtime XAML file is the XAML-language namespace. This namespace includes elements and concepts that are defined by the XAML language, by its language specification. By convention, the XAML-language XAML namespace is mapped to the prefix "x". The default project and file templates for Windows Runtime app projects always define both the default XAML namespace (no prefix, just
xmlns=) and the XAML-language XAML namespace (prefix "x") as part of the root element.
The "x" prefix/XAML-language XAML namespace contains several programming constructs that you use often in your XAML. Here are the most common ones:
Other programming constructs in the XAML-language XAML namespace exist but are not as common. For more info, see XAML namespace (x:) language features.
Mapping custom types to XAML namespaces
One of the most powerful aspects of XAML as a language is that it's easy to extend the XAML vocabulary for your Windows Runtime apps. You can define your own custom types in your app's programming language and then reference your custom types in XAML markup. Support for extension through custom types is fundamentally built-in to how the XAML language works. Frameworks or app developers are responsible for creating the backing objects that XAML references. Neither frameworks nor the app developer are bound by specifications of what the objects in their vocabularies represent or do beyond the basic XAML syntax rules (there are some expectations of what the XAML-language XAML namespace types should do, but the Windows Runtime provides all the necessary support).
If you use XAML for types that come from libraries other than the Windows Runtime core libraries and metadata, you must declare and map a XAML namespace with a prefix. Use that prefix in element usages to reference the types that were defined in your library. You declare prefix mappings as xmlns attributes, typically in a root element along with the other XAML namespace definitions.
To make your own namespace definition that references custom types, you first specify the keyword xmlns:, then the prefix you want. The value of that attribute must contain the keyword using: as the first part of the value. The remainder of the value is a string token that references the specific code-backing namespace that contains your custom types, by name.
The prefix defines the markup token that is used to refer to that XAML namespace in the remainder of the markup in that XAML file. A colon character (:) goes between the prefix and the entity to be referenced within the XAML namespace.
For example, the attribute syntax to map a prefix
myTypes to the namespace
myCompany.myTypes is:
xmlns:myTypes="using:myCompany.myTypes", and a representative element usage is:
<myTypes:CustomButton/>
For more info on mapping XAML namespaces for custom types, including special considerations for Visual C++ component extensions (C++/CX), see XAML namespaces and namespace mapping.
Other XAML namespaces
You often see XAML files that define the prefixes "d" (for designer namespace) and "mc" (for markup compatibility). Generally these are for infrastructure support, or to enable scenarios in a design-time tool. For more info, see the "Other XAML namespaces" section of the XAML namespaces topic.
Markup extensions
Markup extensions are a XAML language concept that is often used in the Windows Runtime XAML implementation. Markup extensions often represent some kind of "shortcut" that enables a XAML file to access a value or behavior that isn't simply declaring elements based on backing types. Some markup extensions can set properties with plain strings or with additionally nested elements, with the goal of streamlining the syntax or the factoring between different XAML files.
In XAML attribute syntax, curly braces "{" and "}" indicate a XAML markup extension usage. This usage directs the XAML processing to escape from the general treatment of treating attribute values as either a literal string or a directly string-convertible value. Instead, a XAML parser calls code that provides behavior for that particular markup extension, and that code provides an alternate object or behavior result that the XAML parser needs. Markup extensions can have arguments, which follow the markup extension name and are also contained within the curly braces. Typically, an evaluated markup extension provides an object return value. During parsing, that return value is inserted into the position in the object tree where the markup extension usage was in the source XAML.
Windows Runtime XAML supports these markup extensions that are defined under the default XAML namespace and are understood by the Windows Runtime XAML parser:
- Binding: supports data binding, which defers a property value until it is interpreted under a data context that only exists at run-time. This markup extension supports a wide range of arguments. For more info, see Binding markup extension.
- StaticResource: supports referencing resource values that are defined in a ResourceDictionary. These resources can be in a different XAML file but must ultimately be findable by the XAML parser at load time. The argument of a
{StaticResource}usage identifies the key (the name) for a keyed resource in a ResourceDictionary.
- ThemeResource: similar to StaticResource but can respond to run-time theme changes. ThemeResource appears quite often in the Windows Runtime default XAML templates, because most of these templates are designed for compatibility with the user switching the theme while the app is running.
- TemplateBinding: a special case of Binding that supports control templates in XAML and their eventual usage at run time.
- RelativeSource: enables a particular form of template binding where values come from the templated parent.
- CustomResource: for advanced resource lookup scenarios.
Windows Runtime also supports the x:Null markup extension. You use this to set Nullable values to null in XAML. For example you might use this in a control template for a CheckBox, which interprets null as an indeterminate check state (triggering the "Indeterminate" visual state).
A markup extension generally return an existing instance from some other part of the object graph for the app, or defers a value to run time. Because you can use a markup extension as an attribute value, and that's the typical usage, you often see markup extensions providing values for reference-type properties that might have otherwise required a property element syntax.
For example, here's the syntax for referencing a reusable Style from a ResourceDictionary:
<Button Style="{StaticResource SearchButtonStyle}"/>. A Style is a reference type, not a simple value, so without the
{StaticResource} usage, you would've needed a
<Button.Style> property element and a
<Style> definition within it to set the FrameworkElement.Style property.
By using markup extensions, every property that is settable in XAML is potentially settable in attribute syntax. You can use attribute syntax to provide reference values for a property even if it doesn't otherwise support an attribute syntax for direct object instantiation. Or you can enable specific behavior that defers the general requirement that XAML properties be filled by value types or by newly created reference types.
To illustrate, the next XAML example element that was defined earlier as a keyed resource in a resource dictionary.
<Canvas.Resources> <Style TargetType="Border" x: <Setter Property="BorderBrush" Value="Blue"/> <Setter Property="BorderThickness" Value="5"/> </Style> </Canvas.Resources> ... <Border Style="{StaticResource PageBackground}"> ... </Border>
For reference topics that describe the available markup extensions in Windows Runtime XAML, see Windows Runtime namespace extensions or XAML namespace (x:) language features.
You can nest markup extensions. The innermost markup extension is evaluated first.
Because of markup extensions, you need special syntax for a literal "{" value in an attribute. For more info see Basic XAML syntax guide.
Events
XAML is a declarative language for objects and their properties, but it also includes a syntax for attaching event handlers to objects in the markup. The XAML event syntax can then integrate the XAML-declared events through the Windows Runtime programming model. You specify the name of the. Nearly all Windows Runtime apps are defined by both markup and code-behind sources.
Here's a simple example. The Button class supports an event named Click. You can write a handler for Click that runs code that should be invoked after the user clicks the Button. In XAML, you specify Click as an attribute on the Button. For the attribute value, provide a string that is the method name of your handler.
<Button Click="showUpdatesButton_Click">Show updates</Button>
When you compile, the compiler now expects that there will be a method named
showUpdatesButton_Click defined in the code-behind file, in the namespace declared in the XAML page's x:Class value. Also, that method must satisfy the delegate contract for the Click event. For example:
namespace App1 { public sealed partial class MainPage: Page { ... private void showUpdatesButton_Click (object sender, RoutedEventArgs e) { //your code } } }
' Namespace included at project level Public NotInheritable Class MainPage Inherits Page ... Private Sub showUpdatesButton_Click (sender As Object, e As RoutedEventArgs e) ' your code End Sub ... End Class
// .h namespace App1 { public ref class MainPage sealed { ... private: void showUpdatesButton_Click (Object^ sender, RoutedEventArgs^ e); }; }
Within a project, the XAML is written as a .xaml file, and you use the language you prefer (C#, Visual Basic, C++/CX) to write a code-behind file. When a XAML file is markup-compiled as part of a build action for the project, the location of the XAML code-behind file for each XAML page is identified by specifying a namespace and class as the x:Class attribute of the root element of the XAML page. For more info on how these mechanisms work in XAML and how they relate to the programming and application models, see Events and routed events overview.
Note For C++/CX there are two code-behind files, one is a header (.xaml.h) and the other is implementation (.xaml.cpp). The implementation references the header, and it's technically the header that represents the entry point for the code-behind connection.
Resource dictionaries
Creating a ResourceDictionary is a common task that is usually accomplished by authoring a resource dictionary as an area of a XAML page or a separate XAML file. Resource dictionaries and how to use them is a larger conceptual area that is outside the scope of this topic. For more info see ResourceDictionary and XAML resource references.
XAML and XML
The XAML language is fundamentally based on the XML language. But info, see FrameworkElement.Language. xml:base is valid in markup but parsers ignore it. xml:space is valid, but is only relevant for scenarios described in the XAML and whitespace topic. The encoding attribute is valid in XAML. Only UTF-8 and UTF-16 encodings are supported. UTF-32 encoding is not supported.
Case sensitivity in XAML
XAML is case-sensitive. This is another consequence of XAML being based on XML, which is case-sensitive. The names of XAML elements and attributes are case-sensitive. The value of an attribute is potentially case-sensitive; this depends on how the attribute value is handled for particular properties. For example, if the attribute value declares a member name of an enumeration, the built-in behavior that type-converts a member name string to return the enumeration member value is not case-sensitive. In contrast, the value of the Name property, and utility methods for working with objects based on the name that the Name property declares, treat the name string as case-sensitive.
XAML namescopes
The XAML language defines a concept of a XAML namescope. The XAML namescope concept influences how XAML processors should treat the value of x:Name or Name applied to XAML elements, particularly the scopes in which names should be relied upon to be unique identifiers. XAML namescopes are covered in more detail in a separate topic; see XAML namescopes.
The role of XAML in the development process
XAML plays several important roles in the app development process.
- XAML is the primary format for declaring an app's UI and elements in that UI, if you are programming using C#, Visual Basic or C++/CX. Typically at least one XAML file in your project represents a page metaphor in your app for the initially displayed UI. Additional XAML files might declare additional pages for navigation UI. Other XAML files can declare resources, such as templates or styles.
- You use the XAML format for declaring styles and templates applied to controls and UI for an app.
- You might use styles and templates either for templating existing controls, or if you define a control that supplies a default template as part of a control package. When you use it to define styles and templates, the relevant XAML is often declared as a discrete XAML file with a ResourceDictionary root.
- XAML is the common format for designer support of creating app UI and exchanging the UI design between different designer apps. Most notably, XAML for the app can be interchanged between different XAML design tools (or design windows within tools).
- Several other technologies also define the basic UI in XAML. In relationship to Windows Presentation Foundation (WPF) XAML and Microsoft Silverlight XAML, the XAML for Windows Runtime uses the same URI for its shared default XAML namespace. The XAML vocabulary for Windows Runtime overlaps significantly with the XAML-for-UI vocabulary also used by Silverlight and to a slightly lesser extent by WPF. Thus, XAML promotes an efficient migration pathway for UI originally defined for precursor technologies that also used XAML.
- XAML defines the visual appearance of a UI, and an associated code-behind file defines the logic. You can adjust the UI design without making changes to the logic in code-behind. XAML simplifies the workflow between designers and developers.
- Because of the richness of the visual designer and design surface support for the XAML language, XAML supports rapid UI prototyping in the early development phases.
Depending on your own role in the development process, you might not interact with XAML much. The degree to which you do interact with XAML files also depends on which development environment you are using, whether you use interactive design environment features such as toolboxes and property editors, and the scope and purpose of your Windows Runtime app. Nevertheless, it is likely that during development of the app, you will be editing a XAML file at the element level using a text or XML editor. Using this info, you can confidently edit XAML in a text or XML representation and maintain the validity of that XAML file's declarations and purpose when it is consumed by tools, markup compile operations, or the run-time phase of your Windows Runtime app.
Optimize your XAML for load performance
Here are some tips for defining UI elements in XAML using best practices for performance. Many of these tips relate to using XAML resources, but are listed here in the general XAML overview for convenience. For more info about XAML resources see ResourceDictionary and XAML resource references. For some more tips on performance, including XAML that purposely demonstrates some of the poor performance practices that you should avoid in your XAML, see Optimize loading XAML.
- If you use the same color brush often in your XAML, define a SolidColorBrush as a resource rather than using a named color as an attribute value each time.
- If you use the same resource on more than one UI page, consider defining it in Application.Resources rather than on each page. Conversely, if only one page uses a resource, don't define it in Application.Resources and instead define it only for the page that needs it. This is good both for XAML factoring while designing your app and for performance during XAML parsing.
- For resources that your app packages, check for unused resources (a resource that has a key, but there's no StaticResource reference in your app that uses it). Remove these from your XAML before you release your app.
- If you're using separate XAML files that provides design resources (MergedDictionaries), consider commenting or removing unused resources from these files. Even if you have a shared XAML starting point that you're using in more than one app or that provides common resources for all your app, it's still your app that packages the XAML resources each time, and potentially has to load them.
- Don't define UI elements you don't need for composition, and use the default control templates whenever possible (these templates have already been tested and verified for load performance).
- Use containers such as Border rather than deliberate overdraws of UI elements. Basically, don't draw the same pixel multiple times. For more info on overdraw and how to test for it, see DebugSettings.IsOverdrawHeatMapEnabled.
- Use the default items templates for ListView or GridView; these have special Presenter logic that solves performance issues when building the visual tree for large numbers of list items.
Debugging XAML
Because XAML is a markup language, some of the typical strategies for debugging within Microsoft Visual Studio are not available. For example, there is no way to set a breakpoint within a XAML file. However, there are other techniques that can help you debug issues with UI definitions or other XAML markup while you're still developing your app.
When there are problems with a XAML file, the most typical result is that some system or your app will throw a XAML parse exception. Whenever there is a XAML parse exception, the XAML loaded by the XAML parser failed to create a valid object tree. In some cases, such as when the XAML represents the first "page" of your application that is loaded as the root visual, the XAML parse exception is not recoverable.
XAML is often edited within an IDE such as Visual Studio and one of its XAML design surfaces. Visual Studio can often provide design-time validation and error checking of a XAML source as you edit it. For example it might display "squiggles" in the XAML text editor as soon as you type a bad attribute value, and you won't even have to wait for a XAML compile pass to see that something's wrong with your UI definition.
Once the app actually runs, if any XAML parse errors have gone undetected at design time, these are reported by the common language runtime (CLR) as a XamlParseException. For more info on what you might be able to do for a run-time XamlParseException, see Exception handling for Windows Runtime apps in C# or Visual Basic.
Note Apps that use C++/CX for code don't get the specific XamlParseException. But the message in the exception clarifies that the source of the error is XAML-related, and includes context info such as line numbers in a XAML file, just like XamlParseException does.
Fore more info on debugging a Windows Runtime app, see Start a debug session.
Related topics
Dependency properties overview
ResourceDictionary and XAML resource references
Windows Runtime namespace extensions
XAML namespaces and namespace mapping
XAML namespace (x:) language features
Quickstart: Creating a UI with XAML
Application resources and localization sample
|
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh700354(v=win.10)
|
CC-MAIN-2018-05
|
refinedweb
| 3,973
| 51.99
|
Maths and maths, codes and codes, but in the real world, we have to deal with words, sentences, etc. Is there anything in C which deals with these types of things? And the answer is yes, they are called strings.
We can think of strings as an array of characters, like 'Sam' is a string.
So, 'Sam' is an array of characters 'S', 'a', 'm' and '\0'
Look at the character at the 3rd index. It represents null character. ASCII value of '\0' is 0 and that of normal 0 is 48. It the represents termination (end) of a string.
We declare string as:-
char name[ ]= "Sam";
As you can see, the syntax of a string (
char name[]) also tells us that it is an array of characters.
We can also declare a string variable using characters as follows.
char name[ ]= {'S', 'a', 'm', '\0'};
Let's see two examples to print a string without and with
for loop.
#include <stdio.h> int main() { char str[ ] = "Hello"; printf("%s\n", str); return 0; }
In the above example, we printed the whole string at once. Now we will see the same example but print individual characters using
for loop.
#include <stdio.h> int main() { char str[ ] = "Hello"; int i; for( i=0; i<6; i++) { printf("%c\n", str[i]); } return 0; }
e
l
l
o
In the first example, we printed string and for that we used
%s with
printf function. Whereas in the second example, we printed single character each time, so we used
%c with the
printf function.
Now let's see an example to input a string from user.
#include <stdio.h> int main() { char name[25]; printf("Enter your name\n"); scanf("%s", name); printf("Your name is %s\n", name); return 0; }
name
Your name is name
You must be wondering the reason for writing array size as 25 while declaring the name array. We did so because we do not know the length of the name that user will input. If in the above example we had given the array size as 2, then the string variable would not have taken the input because the size of the input is greater than the size of the array. So to be on the safe side, take the array size greater than the assumed input size.
For example, if in the above example, we input Sam Brad as the name, then the output will only be Sam because scanf considers only one word and terminates after the first word.
How to take a multi word string
We can take input and give output of a string that consists of more than one word by using gets and puts, where gets is used to take input of a string from user and puts is used to display the string.
Consider the following example.
#include <stdio.h> int main() { char name[25]; printf("Enter your name\n"); gets(name); printf("Your name is "); puts(name); return 0; }
Sam Bard
You name is Sam Bard
Pointers and String
Strings can also be declared using pointers. Let's consider an example.
#include <stdio.h> int main() { char name[ ]= "Sam"; char *p; p = name; /* for string, only this declaration will store its base address */ while( *p != '\0') { printf("%c", *p); p++; } return 0; }
In the above example, 'p' stores the address of name[0], therefore equals to '\0'.
Passing Strings to Functions
This is done in the same as we do with other arrays. The only difference is that this is an array of characters. That's it!
Let's see an example.
#include <stdio.h> void display( char ch[]) { printf("String :"); puts(ch); /* display string */ } int main() { char arr[30]; printf("Enter string\n"); gets(arr); /* input string from user */ display(arr); return 0; }
abcd
String :abcd
Predefined string functions
We can perform different kinds of string functions like joining of 2 strings, comparing one string with another or finding the length of the string. Let's have a look at the list of such functions.
These functions are defined in "string.h" header file, so we need to include this header file also in our code by writing
#include <string.h>
We will see some examples of
strlen,
strcpy,
strcat and
strcmp as these are the most commonly used.
Make sure to check articles mentioned in the further reading of this chapter to see examples of each function available in the
string.h library.
strlen(s1) calculates the length of string s1.
White space is also calculated in length of string.
#include <stdio.h> #include <string.h> int main() { char name[ ]= "Hello"; int len1, len2; len1 = strlen(name); len2 = strlen("Hello World"); printf("length of %s = %d\n", name, len1); printf("length of %s = %d\n", "Hello World", len2); return 0; }
length of Hello World = 11
strcpy(s1, s2) copies the second string s2 to the first string s1.
#include <string.h> #include <stdio.h> int main() { char s2[ ]= "Hello"; char s1[10]; strcpy(s1, s2); printf("Source string = %s\n", s2); printf("Target string = %s\n", s1); return 0; }
Target string = Hello
strcat(s1, s2) concatenates(joins) the second string s2 to the first string s1.
#include <stdio.h> #include <string.h> int main() { char s2[ ]= "World"; char s1[20]= "Hello"; strcat(s1, s2); printf("Source string = %s\n", s2); printf("Target string = %s\n", s pair of characters
#include <stdio.h> #include <string.h> int main() { char s1[ ]= "Hello"; char s2[ ]= "World"; int i, j; i = strcmp(s1, "Hello"); j = strcmp(s1, s2); printf("%d \n %d\n", i, j); return 0; }
-15
2D Array of Characters
Same as 2 D array of integers and other data types, we have 2 D array of characters also.
For example, we can write
char names[4][10] = {
"Andrew",
"Kary",
"Brown",
"Lawren"
};
|
https://www.codesdope.com/c-string/
|
CC-MAIN-2021-39
|
refinedweb
| 972
| 77.57
|
The HTML canvas is an HTML tag,
<canvas>, which is an element where we can draw to using the Canvas API.
Create a canvas
Creating a canvas is as simple as dropping a
<canvas></canvas> in a blank HTML file:
You don’t see anything in the page because the canvas is an invisible element. Let’s add some border:
Chrome automatically adds an 8px margin to the
body element. This is why our border looks like a frame, and you can remove that margin by setting
body { margin: 0; }
We’ll leave the default for now.
Our canvas is now reachable from JavaScript using the DOM Selectors API, so we can use
document.querySelector():
const canvas = document.querySelector('canvas')
Change the background color of a canvas
You do that in CSS:
canvas { background-color: lightblue; }
Resize a canvas
You can set the width and height in CSS:
canvas { border: 1px solid black; width: 100%; height: 100%; }
and in this way the canvas will expand to fill all the outer element size.
If you put the canvas as a first level element in the HTML, the above code will expand the canvas to fit the entire body.
The body is not filling the entire window size. To fill the entire page instead we need to use JavaScript:
canvas.width = window.innerWidth canvas.height = window.innerHeight
If you now remove the body margin and set the background of the canvas using CSS, we can fill our entire page with the canvas and we can start drawing on it:
If the window resizes we need to recalculate the canvas width as well, using a debounce to avoid calling too many times our canvas resizing (the
resize event can be called hundreds of times as you move the window with the mouse, for example):
const debounce = (func) => { let timer return (event) => { if (timer) { clearTimeout(timer) } timer = setTimeout(func, 100, event) } } window.addEventListener('resize', debounce(() => { canvas.width = window.innerWidth canvas.height = window.innerHeight }))
Get a context from the canvas
We want to draw to the canvas.
To do this, we need to get a context:
const c = canvas.getContext('2d')
Some assign the context to a variable named
c, some
ctx- it’s a common way to shortcut “context”
The
getContext() method returns a drawing context on the canvas, according to the type that you pass as parameter.
Valid values are
2d, the one we’ll use
webglto use WebGL version 1
webgl2to use WebGL version 2
bitmaprendererto use with ImageBitmap
Based on the context type, you can pass a second parameter to
getContext() to specify additional options.
In the case of the
2d context, we basically have one parameter we can use in all browsers, and it’s
alpha, a boolean that defaults to true. If set to false, the browser knows the canvas does not have a transparent background and can speed up rendering.
Draw elements to a canvas
With the context we can now draw elements.
We have several methods to do so. We can draw:
- text
- lines
- rectangles
- paths
- images
and for each of those elements we can alter the fill, the stroke, the gradient, the pattern, the shadow, rotate them, scale and perform a lot of operations.
Let’s start with the simplest thing: a rectangle.
The
fillRect(x, y, width, height) method serves this purpose:
c.fillRect(100, 100, 100, 100)
This is going to draw a black rectangle of 100 x 100 pixels, starting from position x 100 and y 100:
You can color the rectangle by using the
fillStyle() method, passing any valid CSS color string:
c.fillStyle = 'white' c.fillRect(100, 100, 100, 100)
You can get creative now and draw many things in this way:
for (let i = 0; i < 60; i++) { for (let j = 0; j < 60; j++) { c.fillStyle = `rgb(${i * 5}, ${j * 5}, ${(i+j) * 50})` c.fillRect(j * 20, i * 20, 10, 10) } }
or
for (let i = 0; i < 60; i++) { for (let j = 0; j < 60; j++) { c.fillStyle = `rgb(${i * 5}, ${j * 5}, ${(i+j) * 50})` c.fillRect(j * 20, i * 20, 20, 20) } }
Drawing elements
As mentioned you can draw many things:
- text
- lines
- rectangles
- paths
- images
Let’s just see a few of them, rectangles and text, to get the gist of how things work. You can find the API for all the rest that you need here.
Changing the colors
Use the
fillStyle and
strokeStyle properties to change the fill and stroke colors of any figure. They accept any valid CSS color, including strings and RGB calculations:
c.strokeStyle = `rgb(255, 255, 255)` c.fillStyle = `white`
Rectangles
- clearRect(x, y, width, height)
- fillRect(x, y, width, height)
- strokeRect(x, y, width, height)
We saw
fillRect() in the previous section.
strokeRect() is similar in how it’s called, but instead of filling a rect, it just draws the stroke using the current stroke style (which can be changed using the
strokeStyle context property):
const c = canvas.getContext('2d') for (let i = 0; i < 61; i++) { for (let j = 0; j < 61; j++) { c.strokeStyle = `rgb(${i * 5}, ${j * 5}, ${(i+j) * 50})` c.strokeRect(j * 20, i * 20, 20, 20) } }
clearRect() sets an area as transparent:
Text
Drawing text is similar to rectangles. You have 2 methods
- fillText(text, x, y)
- strokeText(text, x, y)
which let you write text on the canvas.
x and
y refer to the bottom-left corner.
You change the font family and size using the
font property of the canvas:
c.>IMAGE_11<<
There are other properties you can change, related to text (* = default):
textAlign(start*, end, left, right, center)
textBaseline(top, hanging, middle, alphabetic*, ideographic, bottom)
direction(ltr, rtl, inherit*)
Lines
To draw a line you first call the
beginPath() method, then you provide a starting point with
moveTo(x, y), and then you call
lineTo(x, y) to make the line to that new coordinates set. You finally call
stroke():
c.beginPath() c.moveTo(10, 10) c.lineTo(300, 300) c.stroke()
The line is going to be colored according to the
c.strokeStyle property value.
A more complex example
This code creates a canvas that generates 800 circles:
Every circle is perfectly contained in the canvas, and its radius is randomized.
Any time you resize the window, the elements are regenerated.
You can play around with on Codepen.
const canvas = document.querySelector('canvas') canvas.width = window.innerWidth canvas.height = window.innerHeight const c = canvas.getContext('2d') const circlesCount = 800 const colorArray = [ '#046975', '#2EA1D4', '#3BCC2A', '#FFDF59', '#FF1D47' ] const debounce = (func) => { let timer return (event) => { if (timer) { clearTimeout(timer) } timer = setTimeout(func, 100, event) } } window.addEventListener('resize', debounce(() => { canvas.width = window.innerWidth canvas.height = window.innerHeight init() })) const init = () => { for (let i = 0; i < circlesCount; i++) { const radius = Math.random() * 20 + 1 const x = Math.random() * (innerWidth - radius * 2) + radius const y = Math.random() * (innerHeight - radius * 2) + radius const dx = (Math.random() - 0.5) * 2 const dy = (Math.random() - 0.5) * 2 const circle = new Circle(x, y, dx, dy, radius) circle.draw() } } const Circle = function(x, y, dx, dy, radius) { this.x = x this.y = y this.dx = dx this.dy = dy this.radius = radius this.minRadius = radius this.color = colorArray[Math.floor(Math.random() * colorArray.length)] this.draw = function() { c.beginPath() c.arc(this.x, this.y, this.radius, 0, Math.PI * 2, false) c.strokeStyle = 'black' c.stroke() c.fillStyle = this.color c.fill() } } init()
Another example: animating elements on the canvas
Based on the example above, we animate the elements using a loop. Every circle has its own “life” and moves within the borders of the canvas. When the border is reached it bounces back:
See the Pen HTML Canvas fun with circles, not interactive by Flavio Copes (@flaviocopes) on CodePen.
We achieve this by using
requestAnimationFrame() and slightly moving the image at every frame rendering iteration.
Interact with the elements on the canvas
Here is the above example expanded to let you interact with the circles using the mouse.
When you hover the canvas, the items near your mouse will increase in size, and they will return back to normal when you move somewhere else:
See the Pen HTML Canvas fun with circles by Flavio Copes (@flaviocopes) on CodePen.
How does it work? Well, first I track the mouse position using 2 variables:
let mousex = undefined let mousey = undefined window.addEventListener('mousemove', (e) => { mousex = e.x mousey = e.y })
Then we use those variables inside the update() method of Circle, to determine if the radius should increase (or decrease):
if (mousex - this.x < distanceFromMouse && mousex - this.x > -distanceFromMouse && mousey - this.y < distanceFromMouse && mousey - this.y > -distanceFromMouse) { if (this.radius < maxRadius) this.radius += 1 } else { if (this.radius > this.minRadius) this.radius -= 1 }
distanceFromMouse is a value expressed in pixels (set to 200) that defines how far we want the circles to react to the mouse.
Performance
If you try to edit those projects above and add a bunch more circles and moving parts, you’ll probably notice performance issues. Browsers consume a lot of energy to render the canvas with animations and interactions, so pay attention so that the experience is not ruined on less performant machines than yours.
In particular I had issues when trying to create a similar experience with emojis rather than circles, and I found that text takes a lot more power to render, and so it was sluggish pretty quickly.
See the Pen HTML Canvas fun with Emojis by Flavio Copes (@flaviocopes) on CodePen.
This page on MDN lists many performance tips.
Closing words
This was just an introduction to the possibilities of Canvas, an amazing tool that you can use to create incredible experiences on your web pages.
|
https://flaviocopes.com/canvas/
|
CC-MAIN-2018-47
|
refinedweb
| 1,619
| 64.3
|
This?
Ivan posts a link to the "GHOST TOWN" - a story of a real girl riding on a motorbike through the closed Chernobyl area, where nuclear powerplant has exploded back in 1986. Lots of fantastic photos. Abandoned cities 18 years after the disaster. Deadly amazing and sad story.
I've been there 2 years ago. My mother was born and grew up in a region close to that area (near to Kiev) and that's where my granddad and grandma rest in peace. They've been told about the nuclear disaster a week after! Fucking commy wanted to keep it in a secret. Sad story.
May be I missed something, but looks like Travis Bright is converting Apache Xerces XML parser to .NET. I wonder what for?
Aha, he's PM for the Java Language Conversion Assistant (JLCA). That explains. Btw, one day I stumbled across CSS parsing in .NET. Java version of the product I've been working on used W3C's Flute CSS parser and I didn't manage to find any .NET CSS parsers. So I just created J# project in Visual Studio, imported Flute's java sources, compiled them into dll with no any hitch and that's it - it just worked..
Apparently it's possible to set a background image in VisualStudio.NET text editor via undocumented API. Interesting exercise.
[Via Mike Gunderloy]
Daniel says he's disappointed in SAX.NET project I was writing about. Unlike lazy me, he downloaded it and inspected implementation. Well, I mostly agree with him. This piece of direct thoughtless porting of complex convolute Java API to .NET looks weird and kinda unnatural. "namespace System.Xml.Sax {" isn't what I like to see altogether. Too many conventions are broken. Too bad to taste good. Well, I hope are they will improve things. At least todays conversation in xml-dev gives some hopes to think that.
Attributes2 and friends are traces of long SAX API evolution. SAX was created in a vacuum, there wasn't standard Java XML API at that time, while SAX.NET is going to live in .NET land and must be System.Xml-friendly. The question is is it feasible at all?.
A long and convolute discussion about security problems of using EXSLT.NET in ASP.NET took place in EXSLT.NET message board. Here I'd like to formulate some short summary.
Looks like Google got new site skin. I like it. Lightweight and clean.
This webcast is going to be really interesting one:
MSDN Webcast: Real-World BizTalk Server 2004 Editing and Mapping Techniques - Level 200
This.
Presenter: John Ballard, Program Manager, Microsoft
[Via Frank Arrigo]
May be I missed the train, but look what I discovered in the recent "Microsoft This Week" newsletter: MSN toolbar. It looks exactly like Google toolbar, moreover what's funny, and pages are just the twins!
After all that's good move. I hope the competition is going to be fruitful for us, ordinar users. Let's compare. Both can block pop-up ads, search the net (obviously), keyword highlight. MSN toolbar can launch MSN Hotmail, MSN Messenger and My MSN directly, Google toolbar can't. Google toolbar can fill in forms with one click, MSN toolbar can't. Well, personally, as 1) I'm not using MSN Hotmail and My MSN, 2) my MSN messenger starts at Windows startup; 3) I like/use Google's autofill feature a lot - I still stay with Google toolbar.
Visual Studio 2005 Community Technology Preview March 2004 - Full DVD available for MSDN subscribers!!
Hey, SAX for .NET topic is becoming hot. I was aware of one implementation (to be unveiled really soon), being developed by my fellow MVP/XmlInsider, but apparently there is another one, by Karl Waclawek. Here is what he writes in xml-dev mail list:
The SAX dot NET project on SourceForge has the
goal of porting the SAX2 API to C#/.NET:
A release 0.9 (beta) can be downloaded from: .
What's mostly untested is the helper classes.
The rest has already undergone usability testing
in the form of implementing an adapter for Expat.
Karl is known as seasoned XML developer, I particularly know him as one of Expat XML parser devs. Expat is a great XML parser, originally developed by James Clark (enough said). It's written in C and is magically fast one. At my work we use it as base XML parser for all XML tools we write for mainframes, yeah, that's really good one.
I'm glad to see growing appreciation of the .NET in broad XML community and I do believe community-developed implementation of SAX for .NET would be great step forward this way. The very fact of emerging SAX for .NET projects doesn't mean XmlReader's pull-based XML parsing paradigm is bad or disappoints some of us. Both pull and push parsing paradigms have pros and contras and having both available in .NET is a good sign of the technology maturity.
Here is a definitive answer:
Beginners always ask this question.
Those with a little experience express their opinions passionately.
Experts tell you there is no right answer.Mike Kay.
Here I go again with another experimental creature from my XML Bestiary: IndexingXPathNavigator. This one was inspired by Mark Fussell's "Indexing XML, XML ids and a better GetElementByID method on the XmlDocument class". I've been experimenting with Mark's extended XmlDocument, played a bit with XPathDocument and "idkey()" extension function Mark was talking about. Finally I came to a conclusion that 1)XPath is the way to go (that' not the first time I say it, right? :) and thus what should be extended is XPathNavigator; 2)no need to reinvent the wheel as XSLT's keys is proved excellent stuff. That is what IndexingXPathNavigator is - XPathNavigator, augmented with XSLT keys functionality: it supports declaring keys, lazy or eager indexing and retriving indexed nodes via key() function all as per familiar and proved XSLT semantics.]
X.
Hey, apparently recent Opera browser beta has RSS reader embedded. Here are some screenshorts - here and here. I like that trend.
[Via 10x More Productive Blog]
Ok, I've implemented EXSLT Random module, which consists of the only function random:random-sequence() for EXSLT.NET library. Here is how it looks now:".
Interesting new blog at blogs.msdn.com - "C# Frequently Asked Questions", where the C# team posts answers to common C# questions. Subscribed. Why doesn't C# support default parameters? Why doesn't C# support multiple inheritance? Why doesn't C# support #define macros? Ask your question here.
Watch out for some improvements in the Workspaces bug tracker next week (Tuesday 3/16/04)..
Somehow it happened that one of the most commonly used XmlReader usage patterns ignores NameTable.
That's really unfortunate! Everybody, including Microsofties, MVPs and of course zillions of users blindly follow it, carelessly slowing down XmlReader's parsing speed.
XML.com has published good article "Using XML Catalogs with JAXP". XML Catalogs are successors of SGML Catalogs and in simple words it's a system for defining resolving of resource identifiers (URIs or Public Identifiers) in XML. If you are .NET minded - it's about having XML document (called catalog), where you declaratively define how URIs in DOCTYP, xsi:schemaLocation, xsl:include/xsl:import/document() etc should be resolved by XmlResolver. So instead of writing you own XmlResolver you declare that "" should be resolved as "C:/dtds/xhtml1-strict.dtd" in catalog file and get things done.
I'm sure many of you know this page, but for the rest - here is useful link to default Visual Studio .NET shortcut keys. I like this stuff. My favorite one is CTRL + TAB to navigate over opened files.
[Via Jason Mauss]
I'm going to implement EXSLT Random module for EXSLT.NET lib. It contains the only extension function: number+ random:random-sequence(number?, number?)
The function returns a sequence of random numbers between 0 and 1 (as text nodes obviously). The first argument is number or random numbers to generate (1 by default) and the second one is a seed.
The problem is that .NET's Random class accepts seed as int, while in XPath numbers are double. So simple (I hope) question: how do you think it should be converted?
Hey, look at what Scott Woodgate writes:
Let.
How cool is that?
Dare writes:
We were planning to add support for xml:base to the core XML parser as part of implementing XInclude but given that that it recently went from being a W3C candidate recommendation to going back to being a W3C working draft (partly due to a number of the architectural issues raised by Murata Makoto) the future of the spec is currently uncertain so we've backed off on our implementation.
Yeah, XInclude makes its tangled way to the Recommendation status really slowly. It's been CR for a long time and even there were some slips about PR, but then it's been backed off to WD soapbox again. These days XInclude is ready to climb up again. Many architectural issues have been fixed, syntax and semantics have been modified with respect to the Web architecture, there are enough full implementations for all major platforms (.NET, Java, C). And here is what Jonathan Marsh writes today in www-xml-xinclude-comments@w3.org maillist:
We believe this is resolution completes our resolution of outstanding
issues on XInclude, and we plan to release a new CR draft soon.
By the way, I wrote an article about XInclude and XInclude.NET and hope it'll be published soon.
Meantime some of you guys sent me logos for the XInclude.NET logo contest. Thanks! I'm going to arrange a page with a poll to see public opinions.
This page is an archive of entries from March 2004 listed from newest to oldest.
February 2004 is the previous archive.
April 2004 is the next archive.
Find recent content on the main index or look in the archives to find all content.
|
http://www.tkachenko.com/blog/archives/2004/03/
|
CC-MAIN-2018-05
|
refinedweb
| 1,679
| 67.55
|
What I want to do is so simple, I'm still trying to learn ASP.NET with c# and MVC application but I'm just having a lot of difficulty getting a simple example to go through, then I can grow from it, here's how it goes: I have a simple html5 form that's method is GET, the type is text and I basically want to submit a text into my mvc controller, once my controller get's it, I want it to output that string 'worked' through HTML5, how do I do this?
summary: string 'worked' --> html form --> c# controller --> html (view?)
here's what I got for my 'view' (Search.cshtml)
<form action="Home/Search" method="get">
<input type="text" name="q" />
<input type="submit" value="Search" />
public ActionResult Search(string q)
{
return this.View(q?); // so what exactly is View(q)? what is view returning? }
<p> q </p> <!-- q = 'worked' -->
In your controller, you are calling the
View(...) method incorrectly. The
View(...) method expects the string parameter you're passing to be the path to the razor view you're trying to render.
A quick and simple way to pass the
q variable from your controller to a view to be rendered is using
ViewBag.
If you have a razor view named /Views/Search.cshtml you would do:
public class MyController : Controller { public ActionResult Search(string q) { ViewBag.Query = q; return View("~/Views/Search.cshtml"); } }
Then in /Views/Search.cshtml use it like this:
<p>@ViewBag.Query</p>
|
https://codedump.io/share/EEXQppwSsMOF/1/how-do-i-send-data-to-aspnet-controller-so-that-it-can-return-a-view
|
CC-MAIN-2017-43
|
refinedweb
| 253
| 74.39
|
Java Notes
Array Library Methods
Static methods for manipulating arrays are available in
the
java.util.Arrays and
java.System classes.
Assume the following declarations, where
T is the array element type,
either a primitive, object or either kind of type depending on which method is being called.
T x, key; // This type T can be either a primitive or object type. T[] a, a2; // This type T can be either a primitive or object type. List<T> lst; // Only object types T. Comparator<? super T> comp; // Only object types T. int i; // Index returned from search. int i1, i2; int from; // Lower bound of subscript range. Includes this element. int to; // Upper bound of subscript range. Does not include this element. String s; boolean b; // Normally the method result will be used in an if.
Use
java.util.Arrays.toString(...) for debugging
AVOID default toString(). Using the common debugging technique of printing
intermediate values on the console with arrays is good, but requires calling
a library method to get the results you expect.
Arrays are a subclass of
Object as
ever other object is, and therefore the
toString() method is defined
for every array. The results will very likely disappoint you.
String[] abc = {"a", "b", "c"}; System.out.println(abc); // println calls toString() on object parameters.
This displays the relatively unhelpful "[L", element type, ";@", and hexadecimal memory address of the array.
[Ljava.lang.String;@f6f1b6
USE
java.util.Arrays.toString(...) and you'll get something much more readable.
An
import java.util.*; statement avoids the extra package qualifier.
String[] abc = {"a", "b", "c"}; System.out.println(java.util.Arrays.toString(abc));
This produces the following output, which is the same as
toString()
applied to a List.
[a, b, c]
Use
Arrays.asList(...) for Collections view of array
Make a list. The
Arrays class has some useful methods
for working with arrays, but there are a lot more if you make your array look
like a List. The Collections class has many useful methods, eg,
shuffle, reverse, max, ....
See Collections Class.
import java.util.*; . . . List<String> listabc = Arrays.asList(abc); // The Collections methods can now be use with listabc.
View of underlying array. The list that is created is not a regular
ArrayList. It is based only on the original array, so you can't
do things like adding elements to the list because this would cause reallocation
of a larger underlying array.
Uses the underlying array. One of the really nice things about this is that you can still use the underlying array, which means that creating this list doesn't prevent you from taking advantage of the efficiency of array access for example.
Objects only. The
java.util.Arrays.asList(...) method
only works for arrays of object types. If you try to use it with an array
of primitives, you will get a compilation error.
Shuffling (randomizing) an array
import java.util.*; . . . List<String> labc = Arrays.asList(abc); . . . Collections.shuffle(labc); System.out.println(labc);
Which gave the following output. Note that the
List overrides
toString() to give reasonable output.
[b, a, c]
Inefficient? Not necessarily. Constructing a new List object from the array is only inefficient if you're doing it frequently. If you do it very often then you want to avoid creating a new list each time. Creating the list one time is not a problem because the list that is produced is backed by this one array -- you can change elements in the array and the list will reflect those changes.
Can't shuffle arrays of primitives this way!
Lists and other Collections classes only work on objects, not on primitive values, so you have to do the shuffling "by hand". See Random Numbers - shuffling for how to do it yourself.
Commentary
A
shuffle(...) method that works with primitive types
should be added to the
Arrays class.
|
http://www.fredosaurus.com/notes-java/data/arrays/arrays-library.html
|
CC-MAIN-2021-49
|
refinedweb
| 643
| 68.47
|
This class is deprecated. More...
#include <XercesAttrBridge.hpp>
NodeListthat contains all children of this node.
NamedNodeMapcontaining the attributes of this node (if it is an
Element) or
nullotherwise.
DOM_Documentobject associated with this node.
newChildbefore the existing child node
refChild.
oldChildwith
newChildin the list of children, and returns the
oldChildnode.
oldChildfrom the list of children, and returns it.
newChildto the end of the list of children of this node.
DOM_Textnodes in the full depth of the sub-tree underneath this
DOM_Node, including attribute nodes, into a "normal" form where only markup (e.g., tags, comments, processing instructions, CDATA sections, and entity references) separates
DOM_Textnodes, i.e., there are no adjacent
DOM_Textnodes.
nullif it is unspecified.
nullif it is unspecified.
Elementnode this attribute is attached to or
nullif this attribute is not in use.
This class is deprecated.
Definition at line 57 of file XercesAttrBridge.hpp.
Adds the node
newChild to the end of the list of children of this node.
If the
newChild is already in the tree, it is first removed.
Returns a duplicate of this node.
This function.
Returns true if the attribute received its value explicitly in the XML document, or if a value was assigned programatically with the setValue function.
Returns false if the attribute value came from the default value declared in the document's DTD.
Get the Xerces node this instance represent.
Definition at line 471 of file XercesAttrBridge.hpp.
This is a convenience method to allow easy determination of whether a node has any children.
trueif the node has any children,
falseif the node has no children.. Note that a
DOM_Node that has never been assigned to refer to an actual node is == null.
Tests whether the DOM implementation implements a specific feature and that feature is supported by this node.
trueif the specified feature is supported on this node,
falseotherwise.
Puts all
DOM_Text nodes in the full depth of the sub-tree underneath this
DOM_Node, including attribute nodes, into a "normal" form where only markup (e.g., tags, comments, processing instructions, CDATA sections, and entity references) separates
DOM_Text nodes, i.e., there are no adjacent
DOM_CDATASections, the normalize operation alone may not be sufficient, since XPointers do not differentiate between
DOM_Text nodes and
DOM_CDATASection nodes.
Removes the child node indicated by
oldChild from the list of children, and returns it.
Replaces the child node
oldChild with
newChild in the list of children, and returns the
oldChild node.
If
newChild is a
DOM_DocumentFragment object,
oldChild is replaced by all of the
DOM_DocumentFragment children, which are inserted in the same order.
If the
newChild is already in the tree, it is first removed.
Sets the value of the node.
Any node which can have a nodeValue (
For most types of Node, value is null and attempting to set it will throw DOMException(NO_MODIFICATION_ALLOWED_ERR). This will also be thrown if the node is read-only.
Set the namespace prefix of this node.
Note that setting this attribute, when permitted, changes the
nodeName attribute, which holds the qualified name, as well as the
tagName and
name attributes of the
DOM_Element and
DOM_Attr interfaces, when applicable.
Note also that changing the prefix of an attribute, that is known to have a default value, does not make a new attribute with the default value and the original prefix appear, since the
namespaceURI and
localName do not change.
Sets the value of the attribute.
A text node with the unparsed contents of the string will be created.
Interpreting class diagrams
Doxygen and GraphViz are used to generate this API documentation from the Xalan-C header files.
|
http://xalan.apache.org/xalan-c/apiDocs/classXercesAttrBridge.html
|
CC-MAIN-2019-22
|
refinedweb
| 596
| 57.98
|
I am trying to write a script that will shut down the computer if a few requirements are filled with the command
os.system("poweroff")
os.system("shutdown now -h")
Many of the linux distributions out there require super user privileges to execute
shutdown or
halt, but then, how come that if you're sitting on your computer you can power it off without being root? Well... the idea behind this is that if you have physical access to the computer, you could pretty much pull the power cord and power it off anyways, so nowadays, many distributions allow power-off though access to the local System Bus accessible through
dbus. Problem with
dbus (or the services exposed through it, rather)? It's constantly changing. I'd recommend installing a dbus viewer tool such as D-feet (be advised: it's still pretty hard to visualize, but it may help)
Take a look to these Dbus shutdown scripts.
If you still have HAL in your distrubution (is on the way to being deprecated) try this:
import dbus sys_bus = dbus.SystemBus() hal_srvc = sys_bus.get_object('org.freedesktop.Hal', '/org/freedesktop/Hal/devices/computer') pwr_mgmt = dbus.Interface(hal_srvc, 'org.freedesktop.Hal.Device.SystemPowerManagement') shutdown_method = pwr_mgmt.get_dbus_method("Shutdown") shutdown_method()
This works on a Ubuntu 12.04 (I just powered off my computer to make sure it worked). If you have something newer... well, it may not work. It's the downside of this method: it is very distribution specific.
You might have to install the
dbus-python package for this to work ()
UPDATE 1:
I've been doing a little bit of research and it looks like this is done in newer Ubuntu versions through ConsoleKit. I've tested the code below in my Ubuntu 12.04 (which has the deprecated HAL and the newer ConsoleKit) and it did shut my computer off:
>>> import dbus >>> sys_bus = dbus.SystemBus() >>> ck_srv = sys_bus.get_object('org.freedesktop.ConsoleKit', '/org/freedesktop/ConsoleKit/Manager') >>> ck_iface = dbus.Interface(ck_srv, 'org.freedesktop.ConsoleKit.Manager') >>> stop_method = ck_iface.get_dbus_method("Stop") >>> stop_method()
UPDATE 2:
Probably why can you do this without being
root deserves a bit of a wider explanation. Let's focus on the newer
ConsoleKit (
HAL is way more complicated and messy, IMHO).
The
ConsoleKit is a service running as
root in your system:
borrajax@borrajax:/tmp$ ps aux|grep console-kit root 1590 0.0 0.0 1043056 3876 ? Sl Dec05 0:00 /usr/sbin/console-kit-daemon --no-daemon
Now,
d-bus is just a message passing system. You have a service, such as ConsoleKit that exposes an interface to
d-bus. One of the methods exposed is the
Stop (shown above). ConsoleKit's permissions are controlled with PolKit, which (despite on being based on regular Linux permissions) offers a bit of a finer grain of control for "who can do what". For instance, PolKit can say things like "If the user is logged into the computer, then allow him to do something. If it's remotely connected, then don't.". If PolKit determines that your user is allowed to call ConsoleKit's
Stop method, that request will be passed by (or through)
d-bus to ConsoleKit (which will subsequently shutdown your computer because it can... because it worth's it... because it's
root)
Further reading:
To summarize: You can't switch a computer off without being
root. But you can tell a service that is running as
root to shutdown the system for you.
BONUS:
I read in one of your comments that you wanna switch the computer off after a time consuming task to prevent it from overheating... Did you know that you can probably power it on at a given time using RTC? (See this and this) Pretty cool, uh? (I got so excited when I found out I could do this... ) :-D
|
https://codedump.io/share/86BGFmGp8SDi/1/shutting-down-computer-linux-using-python
|
CC-MAIN-2017-43
|
refinedweb
| 636
| 65.22
|
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
#include <boost/math/distributions/arcsine.hpp>
namespace boost{ namespace math{ template <class RealType = double, class Policy = policies::policy<> > class arcsine_distribution; typedef arcsine_distribution<double> arcsine; // double precision standard arcsine distribution [0,1]. template <class RealType, class Policy> class arcsine_distribution { public: typedef RealType value_type; typedef Policy policy_type; // Constructor from two range parameters, x_min and x_max: arcsine_distribution(RealType x_min, RealType x_max); // Range Parameter accessors: RealType x_min() const; RealType x_max() const; }; }} // namespaces
The class type
arcsine_distribution
represents an arcsine
probability
distribution function. The arcsine distribution is named because
its CDF uses the inverse sin-1 or arcsine.
This is implemented as a generalized version with support from x_min to x_max providing the 'standard arcsine distribution' as default with x_min = 0 and x_max = 1. (A few make other choices for 'standard').
The arcsine distribution is generalized to include any bounded support a <= x <= b by Wolfram and Wikipedia, but also using location and scale parameters by Virtual Laboratories in Probability and Statistics Arcsine distribution. The end-point version is simpler and more obvious, so we implement that. If desired, this outlines how the Beta Distribution can be used to add a shape factor.
The probability density function PDF for the arcsine distribution defined on the interval [x_min, x_max] is given by:
f(x; x_min, x_max) = 1 /(π⋅√((x - x_min)⋅(x_max - x_min))
For example, Wolfram Alpha arcsine distribution, from input of
N[PDF[arcsinedistribution[0, 1], 0.5], 50]
computes the PDF value
0.63661977236758134307553505349005744813783858296183
The Probability Density Functions (PDF) of generalized arcsine distributions are symmetric U-shaped curves, centered on (x_max - x_min)/2, highest (infinite) near the two extrema, and quite flat over the central region.
If random variate x is x_min or x_max, then the PDF is infinity. If random variate x is x_min then the CDF is zero. If random variate x is x_max then the CDF is unity.
The 'Standard' (0, 1) arcsine distribution is shown in blue and some generalized examples with other x ranges.
The Cumulative Distribution Function CDF is defined as
F(x) = 2⋅arcsin(√((x-x_min)/(x_max - x))) / π
arcsine_distribution(RealType x_min, RealType x_max);
constructs an arcsine distribution with range parameters x_min and x_max.
Requires x_min < x_max, otherwise domain_error is called.
For example:
arcsine_distribution<> myarcsine(-2, 4);
constructs an arcsine distribution with x_min = -2 and x_max = 4.
Default values of x_min = 0 and x_max =
1 and a
typedef arcsine_distribution<double> arcsine;
mean that
arcsine as;
constructs a 'Standard 01' arcsine distribution.
RealType x_min() const; RealType x_max() const;
Return the parameter x_min or x_max from which this distribution was constructed.
So, for example:
using boost::math::arcsine_distribution; arcsine_distribution<> as(2, 5); // Constructs a double arcsine distribution. BOOST_MATH_ASSERT(as.x_min() == 2.); // as.x_min() returns 2. BOOST_MATH_ASSERT(as.x_max() == 5.); // as.x_max() returns 5. arcsine distribution is useful to describe Random walks, (including drunken walks) Brownian motion, Weiner processes, Bernoulli trials, and their application to solve stock market and other ruinous gambling games.
The random variate x is constrained to x_min and x_max, (for our 'standard' distribution, 0 and 1), and is usually some fraction. For any other x_min and x_max a fraction can be obtained from x using
fraction = (x - x_min) / (x_max - x_min)
The simplest example is tossing heads and tails with a fair coin and modelling the risk of losing, or winning. Walkers (molecules, drunks...) moving left or right of a centre line are another common example.
The random variate x is the fraction of time spent on the 'winning' side. If half the time is spent on the 'winning' side (and so the other half on the 'losing' side) then x = 1/2.
For large numbers of tosses, this is modelled by the (standard [0,1]) arcsine distribution, and the PDF can be calculated thus:
std::cout << pdf(as, 1. / 2) << std::endl; // 0.637 // pdf has a minimum at x = 0.5
From the plot of PDF, it is clear that x = ½ is the minimum of the curve, so this is the least likely scenario. (This is highly counter-intuitive, considering that fair tosses must eventually become equal. It turns out that eventually is not just very long, but infinite!).
The most likely scenarios are towards the extrema where x = 0 or x = 1.
If fraction of time on the left is a ¼, it is only slightly more likely because the curve is quite flat bottomed.
std::cout << pdf(as, 1. / 4) << std::endl; // 0.735
If we consider fair coin-tossing games being played for 100 days (hypothetically continuously to be 'at-limit') the person winning after day 5 will not change in fraction 0.144 of the cases.
We can easily compute this setting x = 5./100 = 0.05
std::cout << cdf(as, 0.05) << std::endl; // 0.144
Similarly, we can compute from a fraction of 0.05 /2 = 0.025 (halved because we are considering both winners and losers) corresponding to 1 - 0.025 or 97.5% of the gamblers, (walkers, particles...) on the same side of the origin
std::cout << 2 * cdf(as, 1 - 0.975) << std::endl; // 0.202
(use of the complement gives a bit more clarity, and avoids potential loss of accuracy when x is close to unity, see why complements?).
std::cout << 2 * cdf(complement(as, 0.975)) << std::endl; // 0.202
or we can reverse the calculation by assuming a fraction of time on one side, say fraction 0.2,
std::cout << quantile(as, 1 - 0.2 / 2) << std::endl; // 0.976 std::cout << quantile(complement(as, 0.2 / 2)) << std::endl; // 0.976
Summary: Every time we toss, the odds are equal, so on average we have the same change of winning and losing.
But this is not true for an an individual game where one will be mostly in a bad or good patch.
This is quite counter-intuitive to most people, but the mathematics is clear, and gamblers continue to provide proof.
Moral: if you in a losing patch, leave the game. (Because the odds to recover to a good patch are poor).
Corollary: Quit while you are ahead?
A working example is at arcsine_example.cpp including sample output .
The arcsine distribution with x_min = 0 and x_max = 1 is special case of the Beta Distribution with α = 1/2 and β = 1/2.
This distribution is implemented using sqrt, sine, cos and arc sine and
cos trigonometric functions which are normally accurate to a few machine epsilon.
But all values suffer from loss
of significance or cancellation error for values of x
close to x_max. For example, for a standard [0, 1]
arcsine distribution as, the pdf is symmetric about
random variate x = 0.5 so that one would expect
pdf(as, 0.01) ==
pdf(as, 0.99). But
as x nears unity, there is increasing loss
of significance. To counteract this, the complement versions of
CDF and quantile are implemented with alternative expressions using cos-1
instead of sin-1. Users should see why
complements? for guidance on when to avoid loss of accuracy by using
complements.
The results were tested against a few accurate spot values computed by Wolfram Alpha, for example:
N[PDF[arcsinedistribution[0, 1], 0.5], 50] 0.63661977236758134307553505349005744813783858296183
In the following table a and b are the parameters x_min and x_max, x is the random variable, p is the probability and its complement q = 1-p.
The quantile was calculated using an expression obtained by using Wolfram Alpha to invert the formula for the CDF thus
solve [p - 2/pi sin^-1(sqrt((x-a)/(b-a))) = 0, x]
which was interpreted as
Solve[p - (2 ArcSin[Sqrt[(-a + x)/(-a + b)]])/Pi == 0, x, MaxExtraConditions -> Automatic]
and produced the resulting expression
x = -a sin^2((pi p)/2)+a+b sin^2((pi p)/2)
Thanks to Wolfram for providing this facility.
|
https://www.boost.org/doc/libs/1_77_0/libs/math/doc/html/math_toolkit/dist_ref/dists/arcine_dist.html
|
CC-MAIN-2021-49
|
refinedweb
| 1,320
| 56.96
|
I have been able to run the sikuli scripts in intellij and pycharm both however when I try to debug It I get the following error
Traceback (most recent call last):
File "C:\Users\
from _pydevd_
File "C:\Users\
from _pydev_
File "C:\Users\
import socket
File "C:\jython2.
from _socket import (
File "C:\jython2.
from collections import namedtuple, Iterable
ImportError: cannot import name Iterable
Windows 10 64 bit
Intellij community edition 2017.3.4
Sikuli 1.1.2
jython 2.7.1
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Solved:
- 2018-03-26
- Last query:
- 2018-03-26
- Last reply:
- 2018-03-23
But when I am running the IDE the first line says the below.
C:\jython2.
When I try to set up the 2.7.0 jython as the interpreter and install the packages I get the following error:
Traceback (most recent call last):
File "C:\Users\
import setuptools
File "C:\Users\
from setuptools.
File "C:\Users\
from pkg_resources.
File "C:\Users\
import email.parser
File "C:\jython2.
from email.feedparser import FeedParser
File "C:\jython2.
from email import message
File "C:\jython2.
import email.charset
File "C:\jython2.
import email.base64mime
File "C:\jython2.
from email.utils import fix_eols
File "C:\jython2.
import socket
File "C:\jython2.
from _socket import (
File "C:\jython2.
from collections import namedtuple, Iterable
ImportError: cannot import name Iterable
I just made a test on my Mac with Pycharm - works
- PyCharm uses the 2.7.0 interpreter, but my SikuliX is setup with 2.7.1
I had the problem, that the <appdata>
I deleted the folder and ran a PythonConsole in PyCharm with these additional startup lines:
import org.sikuli.
from sikuli import *
this restored the Lib folder.
This PythonConsole is helpful to do some quick checks or run a script.
After stopping and restarting PyCharm it worked.
In my run configuration for the project I have a pointer to sikulixapi.jar in the EnvVariables field
That totally worked Raiman ! thanks for the solution.
Thanks RaiMan, that solved my question.
I am also trying to make selenium work with sikuli scripts in the pycharm. any ideas about how to do it ?
Thankyou RAiman it actually worked!
check your setup:
you say you have Jython 2.7.1 but the stack trace says:
File "C:\jython2.
7.0\Lib\ socket. py", line 3, in <module>
from _socket import (
|
https://answers.launchpad.net/sikuli/+question/667140
|
CC-MAIN-2018-17
|
refinedweb
| 406
| 69.28
|
Django has inbuilt support for translation. To add support of multiple languages in Django follow these steps.
Let's assume we are going to support English and Japanese in a Django application.
settings.pyfile make the below changes
1.1. Import the
gettext_lazy function.
from django.utils.translation import gettext_lazy as _
1.2. define a list of tuples. The first item in the tuple will be language code and the second item will be the language label.
LANGUAGES = [
('en', _('English')),
('ja', _('Japanese')),
]
1.3. Internationalization and translation settings should be True
USE_I18N = True
USE_L10N = True
1.4. Add language session key and language cookie key. These keys are used to store language code in session and in cookies.
LANGUAGE_SESSION_KEY = 'session_language_appname'
LANGUAGE_COOKIE_NAME = 'cookie_language_appname' # it could be anything
1.5. Add LocaleMiddleware in the list of middlewares. It should be after SessionMiddleware and before CommonMiddleware.
MIDDLEWARE = [
'django.middleware.security.SecurityMiddle',
]
1.6. Now create a directory 'locale' in your app folder and add the path to the LOCALE_PATHS in the settings.py file. This is the list of paths where Django will look the translation files. Here we are creating a separate locale directory for each application.
LOCALE_PATHS = [
os.path.join(BASE_DIR, 'app_name/locale')
]
2.1. In each template where you want to translate the labels or strings, import the
i18n template tag at the top of the page.
{% load i18n %}
2.2. Now use the
trans template tag with the labels on page. For example on the login page, to translate the 'password' label as per active language, do this
{% trans 'password' %}
2.3. Now go inside the app directory and run the below command to generate the message translation file.
django-admin makemessages -l ja
Here
-l flag is used to mention language. 'ja' is for Japanese. This will generate a file at the below path.
app_name/locale/ja/LC_MESSAGES/django.po
2.4. Open and edit the
django.po file. You will find multiple records in the combination of 'msgid' and 'msgstr'. msgid is the default string label that needs to be converted. msgstr is the converted string. You will see one entry with 'msgid' equals to 'password' and 'msgstr' as an empty string. Find the Japanese translation of 'password' and put it against 'msgstr'. Repeat this process for all English strings.
#: templates/app_name/login.html:26 msgid "password" msgstr "パスワード"
2.5. Now compile these messages by running the below command.
django-admin compilemessages
This will generate another file of compiled messages with the name
django.mo at the same location and is not in a human-readable format.
3.1 Django uses its own mechanism to discover the activated language. As you can see the last fallback is the LANGUAGE_CODE in the settings.py file. Change its value to 'ja' and reload the application. You will see all the labels in templates inside the trans template tag has been converted. If it is not working as expected, please check if the Japanese equivalent of a string is present in
django.po file.
4.1. Option 3.1 is good for development and testing only. Every user should be able to set their own prefered language in the application. For this, we have to provide an option to selected the prefered language on the website. In the header (or any page) of your application write the below code to display the option to set the prefered language.
{% get_available_languages as LANGUAGES %}
{% for lang in LANGUAGES %}
{% if LANGUAGE_CODE == lang.0 %}
{{lang.1}} ({{lang.0}})
{% else %}
<a href="{% url 'app_name:set_language' %}?l={{lang.0}}">{{lang.1}} ({{lang.0}})</a>
{% endif %} |
{% endfor %}
This code is listing all the available languages by picking the list of tuples from
settings.py file. Currently, active language is not clickable. We can set the other language as the prefered language by clicking on it.
4.2. Create an entry in the urls.py file.
urlpatterns += [
path(r'set-language/', views.set_language, name='set_language'),
]
4.3. Create a view with the name set_language.
@login_required
def set_language(request):
lang = request.GET.get('l', 'en')
request.session[settings.LANGUAGE_SESSION_KEY] = lang
# return to previous view
response = HttpResponseRedirect(request.META.get('HTTP_REFERER', '/'))
# set cookies as well
response.set_cookie(settings.LANGUAGE_COOKIE_NAME, lang)
return response
This view is setting the selected language in session and then in cookies. After the view is redirecting to the same page we were on.
4.4 Reload the application and you can select the prefered language from the header of the application. The page will reload and your pages will have the text displayed in the prefered language.
5.1. In your views.py file, import the gettext_lazy file.
from django.utils.translation import gettext as _
5.2. Wrap the text inside
gettext or
gettext_lazy function (which has been imported under an alias
_ )
msg = _('No entry in table')
5.3. Again repeat steps 2.3, 2.4 and 2.5.
Built-in Django models, their field labels and common buttons in the admin site are automatically taken care of.
6.1. To translate the user-defined models' names, use the verbose name of models.
class Meta:
managed = False
db_table = 'Person'
verbose_name = _('Person')
verbose_name_plural = _('Persons')
6.2. For field names, use help_text and verbose_name.
code = models.CharField(db_column='Code', primary_key=True, max_length=100,
help_text=_('Code'), verbose_name=_('Code'))
name = models.CharField(db_column='Name', max_length=500, help_text=_('Name'),
verbose_name=_('Name'))
For more points regarding admin-site, visit this article.
Everyone uses some or more javascript (or any framework of javascript) to do some basic validations at least. How to translate the messages returned by javascript functions.
7.1. Django have something called JavascriptCatalog which produces a javascript code with functions that mimic
gettext and other translation functions. Please refer to this official Django documentation which is straight forward.
7.2. I tried implementing JavascriptCatalog but could not get it working. Hence I wrote a small implementation of my own which just do the work for me in a small application.
In your javascript file, write a function that accepts the primary language text, English in our case and then return the translated text in the prefered language.
This function picks the prefered language from a hidden input field which was created along with the list of all available languages in step 4.1
<input type="hidden" id="current_language_code" name="current_language_code" value="{{LANGUAGE_CODE}}">
Javascript Code:
var m = {
'select category': 'カテゴリを選んでください',
'No files found for matching criteria': 'ファイルが見つかりません',
};
function getCurrentLanguage() {
return document.getElementById('current_language_code').value;
}
function getLangText(text) {
if (getCurrentLanguage() == 'ja') {
return m[text];
}
return text;
}
Now use this code to translate messages returned by Javascript functions as below.
document.getElementById('file_msg').innerHTML = getLangText('No files found for matching criteria');
This approach has limitations like it has support for two languages. To add support for more languages, we need to modify the code.
Please connect in case of any concern.
-.
|
https://pythoncircle.com/post/745/multiple-language-support-in-django-application-internationalization-in-django/
|
CC-MAIN-2021-43
|
refinedweb
| 1,150
| 59.8
|
Verbose data.table and uncovering hidden cedta’s data table awareness decisions
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
Introduction
When speed and memory efficiency is important, the data.table package is one of the ways to improve those aspects of our R code dramatically. Including data.table in a package also comes with the added benefit of only importing the methods package, which is part of base R. We must also however pay attention to correctly importing and using methods, as data.table handles data.frame subsetting operators in a special way. This post is mostly a lesson learned for future self on how I did not pay attention and what I found out investigating.
TL;DR if you just want something useful
- Use
options(datatable.verbose = TRUE)to see useful logging information
- If you are getting weird errors with subset methods, check if data frame methods do not get called instead of the data table ones (e.g. running
traceback()after the error occurs)
- If so, check if
data.table:::cedta()returns
FALSEfor your package. And if it does, check if you import data.table in the NAMESPACE file of your package
A somewhat reproducible example of the issue
Imagine a very simple function that takes a data table and sums a column with a name provided via the
y argument, grouped by the column name provided via the
by argument. An oversimplified definition and example use with the
mtcars dataset could look as follows:
sumData <- function(dt, y, by) dt[, sum(get(y)), by = by] mtcarsdt <- data.table::as.data.table(datasets::mtcars) sumData(mtcarsdt, "disp", "gear") ## gear V1 ## 1: 4 1476.2 ## 2: 3 4894.5 ## 3: 5 1012.4
So far so good, everything works great. Now we put our awesome function into a nice package called
dtexample. Add some roxygen documentation, add data.table into Imports in our DESCRIPTION, try to install our package. All still works. Run R CMD check for good measure and get 0 errors, 0 warnings and 0 notes, like a boss!
Now let’s see our function in action, from within the new package:
dtexample::sumData(mtcarsdt, "disp", "gear") Error in get(y) : object 'disp' not found
Oops. Something went wrong. Debugging such an issue can be tricky, especially if this happened in a more realistic setting, such as writing the function across multiple days and having a more complicated function than a one-liner. Most often the issue is inside the actual code, especially when passing around more complicated quoted expressions into data table’s subsetting machinery.
Traceback and datatable.verbose to the rescue
Let us look at the
traceback() to get some insight into what is going on:
traceback() ## 5: get(y) ## 4: `[.data.frame`(x, i, j) ## 3: `[.data.table`(dt, , sum(get(y)), by = by) at sumData.R#12 ## 2: dt[, sum(get(y)), by = by] at sumData.R#12 ## 1: dtexample::sumData(dt, "disp", "gear")
Note the
4: despite the object being a data table (which is also confirmed by the third line of the traceback), the data frame method was called. It would also seem that this was deliberate on data table’s side. Let us turn on the
datatable.verbose option and see what it has to say:
options(datatable.verbose = TRUE) dtexample::sumData(mtcarsdt, "disp", "gear") ## cedta decided 'dtexample' wasn't data.table aware. Here is call stack with [[1L]] applied: ## [[1]] ## dtexample::sumData ## ## [[2]] ## `[` ## ## [[3]] ## `[.data.table` ## ## [[4]] ## cedta
Traceback and cedta()
So what is this
cedta()?
Looking at data table’s verbose output, we immediately notice this message:
cedta decided ‘dtexample’ wasn’t data.table aware. Here is call stack with
[[1L]]applied:
So, what is this
cedta() and why is it making such decisions? Let us look how we get from subsetting a data table to a function deciding that our package is not data table aware. Examining the first rows of the body of
data.table:::[.data.table we can see that the subset method first examines the output of
cedta() and if its results is
FALSE, calls the data frame methods. This answers our question of why a data frame method was called:
if (!cedta()) { Nargs = nargs() - (!missing(drop)) ans = if (Nargs < 3L) { `[.data.frame`(x, i) } else if (missing(drop)) `[.data.frame`(x, i, j) else `[.data.frame`(x, i, j, drop) if (!missing(i) & is.data.table(ans)) setkey(ans, NULL) return(ans) }
Now looking into
data.table:::cedta() itself we see that in case
topenv(parent.frame(n)) is not a namespace,
cedta() happily returns
TRUE. This explains why our function worked when it was defined and run from the global environment. However, in case we are in the context of a namespace, our namespace must satisfy at least one of eight conditions:
ans = nsname == "data.table" || "data.table" %chin% names(getNamespaceImports(ns)) || (nsname == "utils" && exists( "debugger.look", parent.frame(n + 1L) )) || (nsname == "base" && all(c("FUN", "X") %chin% ls(parent.frame(n)))) || (nsname %chin% cedta.pkgEvalsUserCode && any( sapply(sys.calls(), function(x) is.name(x[[1L]]) && (x[[1L]] == "eval" || x[[1L]] == "evalq")) ) ) || nsname %chin% cedta.override || isTRUE(ns$.datatable.aware) || tryCatch( "data.table" %chin% get( ".Depends", paste("package", nsname, sep = ":"), inherits = FALSE ), error = function(e) FALSE )
Out of which the most relevant for us is:
"data.table" %chin% names(getNamespaceImports(ns))
When I first saw this, I was like (probably more than 50% of the sentence self-censored):
No way. I could not possibly be so stupid to forget to import data table in the NAMESPACE! (… of course I could)
So, about a minute later, place
@import data.table into the roxygen tags, regenerate the NAMESPACE, re-install the package and all works great.
How could I possibly fail to import anything from data.table and find out earlier?
I think the reason (apart from plain forgetting the obvious) is a combination of the following:
- the subsetting operator is such second nature, that it just did not occur to me to import it with the
@importFromtag and I rarely use
@importon entire packages
R CMD checkwas successful with no notes, warning or errors, again because even if I usually relatively strictly use qualified calls, the subsetting would seem very unnatural like that. There was therefore no mention of
data.table::in the entire code and the checking procedure had nothing to complain about
- the data table method actually did dispatch correctly, so only after a closer look we see the data frame method kicking in. The first thing to investigate (most of the time correctly) is the actual implementation of what is going on with the expressions inside the subsetting operator, especially when passing around and evaluating quoted expressions
So, if you ever see
cedta() making decisions about data table awareness, check your NAMESPACE. Maybe you have just missed the obvious as I did. Happy data tab.
|
https://www.r-bloggers.com/2019/02/verbose-data-table-and-uncovering-hidden-cedtas-data-table-awareness-decisions/
|
CC-MAIN-2021-43
|
refinedweb
| 1,159
| 58.08
|
ayhan
Member since Jan 24, 2009
- Profile: /members/2848-ayhan.htm
Recent Blog Comments By ayhan
Is Not A Four-Letter Word
Posted on Feb 19, 2015 at 9:21 PM... read more »
The User Experience (UX) Of Having One Computer Monitor
Posted on May 13, 2014 at 10:05 AM
@David, I am sure it is good tech. But higher resolution for what? Why spend so much? I am sure it will show some difference on TV or games. Here we are discussing desktop monitors which are used for work. If you are telling me 4K is much healthier; that would be another case and I will need to do s... read more »
The User Experience (UX) Of Having One Computer Monitor
Posted on May 12, 2014 at 1:19 PM
@Ben, 4K will not make any visible difference. today's new led screens (phones, laptops, monitors etc..) are already around 2K. You are developer you don't need more resolution you need more space. also you need to watch some companies such as samsung, lg and some others(this might not be case an... read more »
The User Experience (UX) Of Having One Computer Monitor
Posted on May 10, 2014 at 11:22 PM
I have been using double screen since that I thought I have invented :) last year i have switched to 29". that is only screen you can fit fully 2 webpages side by side. you can stick to single screen just try out 29". not cheap though!... read more »
Stripping XML Name Spaces And Node Prefixes From ColdFusion XML Data (To Simplify XPath)
Posted on Apr 1, 2014 at 1:44 PM
another way to strip namespacesm you can do with XSLT. step 1. save this as such as "removenamespace.xsl" <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet <x... read more »
What Chinese Food Has Taught Me About Life
Posted on Mar 26, 2009 at 12:01 PM
Ben, If you are enjoying chicken it is worth waiting. Things happen always. if you stuck with details life becomes painful. enjoy life without questioning... you are smart person and that is what you did on way to your home.... read more »
My 1,000th ColdFusion Post - What It Means To Me (And Free Prizes)
Posted on Jan 24, 2009 at 4:41 PM
congrat Ben. i love your blog style. keep up good work. I know you like JQuery and web 2.0 and you must have seen a lot. but this web site is kind of web 4.0 (just for fun) read more »
|
https://www.bennadel.com/members/2848-ayhan.htm
|
CC-MAIN-2022-21
|
refinedweb
| 431
| 83.46
|
Hi Marten,
Thansk for your repply. I managed to get the Sitemap using page tree. But i have one problem.
We are using menu list for top menu on the site. The top menu was rendering the pages from right to left. So, our tema have given the sort index in teh reverse order and now the top menu is working fine. But the pages in the Sietmap are getting in reverse order. Cna you please suggest a solution for this.
Many Thanks
One way could be to hook into the Filter event of the control and rearrange the pages in the collection (reverse the order). If I were you I would do this in the top menu rather than the sitemap since the top menu is what does not follow the "expected" behaviour. Also it's flat (not hierarchial) so it would be fewer pages and simpler logic.
Edit: I'm not sure if this will work, perhaps the order is set after the Filter event, but it's worth a try.
This is completely untested code. It also assumes that the order presented in the menulist is the order the pages are in the collection, i.e. that ordering is done before this event handler is called. In that lucky case you should just be able to create an eventhandler that looks something like the below and connect the Filter event of your MenuList to it (for example in markup by setting OnFilter="ReverseOrder"):
using System.Linq;
protected void ReverseOrder(object sender, EPiServer.Filters.FilterEventArgs e)
{
PageDataCollection reversed = (PageDataCollection)e.Pages.Reverse();
e.Pages.Clear();
foreach (PageData p in reversed)
{
e.Pages.Add(p);
}
}
Hi Magnus,
The above code worked with some midifications. Thanks for your favourable help. Below is the modified code.
Thanks Again.
PageDataCollection correctPageOrder = (PageDataCollection)e.Pages;
PageDataCollection reversePageOrder = new PageDataCollection(e.Pages.Count);
for (int i = e.Pages.Count-1; i >= 0; i--)
{
reversePageOrder.Add(correctPageOrder[i]);
}
e.Pages.Clear();
foreach (PageData p in reversePageOrder)
{
e.Pages.Add(p);
}
Hi All,
Can you please tell me how to generate the Site Map for the EPi server driven web site. My requirement is this particular site map should contain links to all the existing pages and for the pages to be created. I mean this should be dynamic enough.
Please get back to me on this as soon as possible.
Many thanks in advance.
|
https://world.episerver.com/forum/developer-forum/Developer-to-developer/Thread-Container/2009/9/Sitemap-for-the-site/
|
CC-MAIN-2020-50
|
refinedweb
| 399
| 68.36
|
batcher 0.1.0
batcher: ^0.1.0 copied to clipboard
Batcher #
A robust future batching solution.
Usage #
Installation #
Add the package to your
pubspec.yaml:
dependencies: ... batcher: ^0.1.0
And import it:
import 'package:batcher/batcher.dart';
Batching #
A
FutureBatcher takes an
Iterable containing generator functions. A
generator function creates a future for the batcher when it needs it. Say, for
example, you want to download a list of files in batches with
package:http:
// A predetermined list of file URIs. final fileUris = <Uri>[ /*...*/]; // Use a single client to speed up mass connections. final client = http.Client(); // Create a list of generators. final downloadFutureGenerators = [ for (final uri in fileUris) () => http.get(uri).then((response) => response.body), ];
Note this is not a list of futures; it's a list of anonymous functions that create futures. This allows a batcher to make the HTTP requests in batches.
Generators can be added to a batcher at any time. If any threads are available, they will start to be called right away; otherwise, they will stay in the queue until a thread is ready.
As well as requiring future generators, a thread count must also be specified. This thread count is the maximum number of futures that will be resolving at any time. This thread count can change at any time; the new number will be used after excess pending operations complete.
With these two things, futures can be batched in a few different ways:
Future-based batching
In this way, queued results are delivered via futures returned by
get functions.
// Create a batcher with 16 threads. final batcher = FutureBatcher<String>(16); // Add a single generator to the queue. final Future<String> future = batcher.get(() => http.get(uri).then((response) => response.body)); // Or, add multiple generators to the queue. final List<Future<String>> futures = batcher.getAll(generators); // The following can also be used to create a batcher and call getAll in one line. final futures = generators.batch(16);
Stream-based batching
If you need to get results from completed futures, but you don't care which result came from which generator, this is an appropriate method. It's more optimised than future-based batching for large amounts of work.
This method utilises the
StreamingFutureBatcher class.
Say, for example, you wish to pull a list of random fake names from an API,
with the
getRandomName function that returns a
Future<String>:
// Create a list of generators. final generators = List.filled(100, () => getRandomName()); // Create a streaming batcher from the generators with 16 threads. final batcher = StreamingFutureBatcher.from(generators, 16); // Or, use the extension function: final batcher = generators.streamBatch(16); // The results stream can be used like any other. await for (final name in batcher.results) { print(name); }
By default, the batcher will close down after all pending generators are
resolved. This can be changed with a constructor argument (or by using the
default constructor) to get behaviour that's closer to
FutureBatcher.
Note that
addAll is used here; while
getAll also works, it adds overhead by
dealing with return values that we don't need.
// Create a streaming batcher with 16 threads. final batcher = StreamingFutureBatcher(16); // Add the generators. batcher.addAll(generators); // Do stuff (like adding more)... ... // Close when done. batcher.close();
Simple batching
This is the way to go if you don't need to
await anything or remember which
results came from which generator.
This is useful, for example, in I/O operations like writing files to a cache service where you don't care if they finish completing:
// File paths and contents. final paths = <String>[/*...*/]; final contents = <String>[/*...*/]; // Map the file data to generators. final generators = List.generate( paths.length, (index) => () => File(paths[index]).writeAsString(contents[index]), ); // Create a batcher with 16 threads. final batcher = FutureBatcher<void>(16); // Add the generators. batcher.addAll(generators);
- Q: What can this be useful for?
A:
- Accelerating network requests (one at a time, you can be limited by server speeds; all at once with
Future.wait, you can be blocked by the platform or server)
- Filesystem I/O, where there are open file limits
- Q: How does this compare to
package:batching_future?
A: Both packages can do similar things, but they're made for different usecases.
batching_futureis great for unpredictable, rapid API calls, with caching support and time-based batching. This package is made for predictable, ongoing batches, taking fixed-sized lists of generators.
|
https://pub.dev/packages/batcher
|
CC-MAIN-2021-31
|
refinedweb
| 729
| 60.21
|
gc-analyze - Analyze Garbage Collector (GC) memory dumps
gc-analyze [OPTION] ... [file]
gc-analyze prints an analysis of a GC memory dump to standard out. The memory dumps may be created by calling "gnu.gcj.util.GCInfo.enumerate(String namePrefix)" from java code. A memory dump will be created on an out of memory condition if "gnu.gcj.util.GCInfo.setOOMDump(String namePrefix)" is called before the out of memory occurs. Running this program will create two files: TestDump001 and TestDump001.bytes. import gnu.gcj.util.*; import java.util.*; public class GCDumpTest { static public void main(String args[]) { ArrayList<String> l = new ArrayList<String>(1000); for (int i = 1; i < 1500; i++) { l.add("This is string #" + i); } GCInfo.enumerate("TestDump"); } } The memory dump may then be displayed by running: gc-analyze -v TestDump001
--verbose -v Verbose output. -p tool-prefix Prefix added to the names of the nm and readelf commands. -d directory Directory that contains the executable and shared libraries used when the dump was generated. --help Print a help message, then exit. --version Print version information, then exit.
Copyright (c) 2001, 2002, 2003,.
|
http://huge-man-linux.net/man1/gc-analyze-4.3.html
|
CC-MAIN-2018-05
|
refinedweb
| 186
| 52.26
|
the bottom of the screen and nothing in the middle. This sits on top of another UIView that has buttons in the middle (B). However, I cannot seem to interact with the buttons in the middle of B.
I can see the buttons in B - I've set the background of A to clearColor - but the buttons in B do not seem to receive touches despite the fact that there are no objects from A actually on top of those buttons.
EDIT - I still want to be able to interact with the objects in the top UIView
Surely there is a simple way of doing this?
You should create a UIView subclass for your top view and override the following method:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event { // UIView will be "transparent" for touch events if we return NO return (point.y < MIDDLE_Y1 || point.y > MIDDLE_Y2); }
You may also look at the hitTest:event: method.
|
https://codedump.io/share/paYZ1Jor1DQ6/1/allowing-interaction-with-a-uiview-under-another-uiview
|
CC-MAIN-2017-13
|
refinedweb
| 155
| 69.72
|
Both ESP8266-12E and ESP8266-07 have one ADC pin that is easily accessible. This means that those ESP8266 boards can read analog signals. In this tutorial we’ll show you how to use analog reading with the ESP8266 using Arduino IDE, MicroPython or Lua firmware.
As an example, we’ll show you how to read analog values from a potentiometer. This post is divided in three sections:
- ESP8266 Analog Read with Arduino IDE
- ESP8266 Analog Read with MicroPython
- ESP8266 Analog Read with Lua/NodeMCU
ESP8266 ADC Specifications
When referring to the ESP8266 ADC pin you will often hear these different terms interchangeably:
- ADC (Analog-to-digital Converter)
- TOUT
- Pin6
- A0
- Analog Pin 0
All these terms refer to the same pin in the ESP8266 that is highlighted in the next section.
ESP8266 ADC Resolution
The ADC pin has a 10-bit resolution, which means you’ll get values between 0 and 1024.
ESP8266 Input Voltage Range
The ESP8266 ADC pin input voltage range is 0 to 1V if you’re using the bare chip. However, most ESP8266 development boards come with an internal voltage divider, so the input range is 0 to 3.3V. So, in sumary:
- ADC Voltage range in ESP8266 development boards: 0 to 3.3V (for example: ESP8266 12-E NodeMCU Kit, WeMos D1 Mini, …)
- ADC Voltage range in ESP8266 chip: 0 to 1V (for example: ESP-07 chip, ESP-12E chip, …)
ESP8266 Analog Pin
With the ESP8266 12-E NodeMCU kit and other ESP8266 development boards, it is very easy to access the A0, you simply connect a jumper wire to the pin (see figure below).
If you’re using an ESP8266 chip, like the ESP8266-07, you need to solder a wire to that pin.
Parts Required
To show you how to use analog reading with the ESP8266, we’ll read the values from a potentiometer. For that, you need to wire a potentiometer to your board.
Here’s the hardware that you need to complete this tutorial:
- Recommended: ESP8266-12E NodeMCU Kit (read ESP8266 development boards comparison)
- Alternative: 1x ESP8266-07 chip or 1x ESP8266-12E chip + FTDI Programmer
- 100 Ohm Resistor (needed if you’re using a bare chip)
- 220 Ohm Resistor(needed if you’re using a bare chip)
- 1k Ohm Potentiometer
- Breadboard
- Jumper wires
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Schematic Diagram
If you’re using an ESP8266 development board, follow the next schematic diagram.
If you’re using an ESP8266 chip with input voltage range of 0V to 1V, you need to make sure that the input voltage on the A0 pin doesn’t exceed 1V. So, you need a voltage divider circuit, as shown below.
We’re using a 100 Ohm and a 220 Ohm resistor, so that the Vout is 1V.
Recommend reading: ESP8266 Pinout Reference: Which GPIO pins should you use?
1. ESP8266 Analog Read with Arduino IDE
This section shows how to read analog values with the ESP8266 using Arduino IDE.
Install ESP8266 in Arduino IDE
In order to upload code to your ESP8266, you need to install the ESP8266 add-on first, if you haven’t already. Follow the next tutorial:
Code
Copy the following code to your Arduino IDE.
/********* Rui Santos Complete project details at *********/ const int analogInPin = A0; // ESP8266 Analog Pin ADC0 = A0 int sensorValue = 0; // value read from the pot int outputValue = 0; // value to output to a PWM pin void setup() { // initialize serial communication at 115200 Serial.begin(115200); } void loop() { // read the analog in value sensorValue = analogRead(analogInPin); // map it to the range of the PWM out outputValue = map(sensorValue, 0, 1024, 0, 255); // print the readings in the Serial Monitor Serial.print("sensor = "); Serial.print(sensorValue); Serial.print("\t output = "); Serial.println(outputValue); delay(1000); }
The code starts by declaring the ESP8266 analog pin in the analogInPin variable:
const int analogInPin = A0; // ESP8266 Analog Pin ADC0 = A0
The potentiometer value will be stored on the sensorValue variable:
int sensorValue = 0; // value read from the pot
In the setup(), initialize the Serial Monitor for debugging purposes:
void setup() { // initialize serial communication at 115200 Serial.begin(115200); }
In the loop(), we read the analog value by using the analogRead() function and passing the analogInPin as an argument. The value is saved on the sensorValue variable:
sensorValue = analogRead(analogInPin);
Finally, the readings are displayed on the Serial Monitor, so that you can actually see what is going on.
Serial.print(sensorValue);
If you want to output PWM signals based on the input voltage of the ADC pin, you need to convert the ADC values to a range of 0 to 255. You need to do this because the PWM output only has 8-bit resolution.
outputValue = map(sensorValue, 0, 1024, 0, 255);
Uploading the Code
Upload the previous code to the ESP8266. Make sure you have the right board and COM port select. Go to Tools> Board and select the ESP8266 model you’re using. In our case, we’re using the ESP8266 12-E NodeMCU Kit.
Go to Tools > Port and select the COM port the ESP8266 is connected to.
Press the Arduino IDE upload button.
Note: if you’re using an ESP-07 or ESP-12E chip, you need an FTDI programmer to upload code.
Demonstration
After uploading the code, open the Serial Monitor at a baud rate of 115200. The analog readings should be displayed.
Rotate the potentiometer and see the values increasing or decreasing.
2. ESP8266 Analog Read with MicroPython
This section show how to read analog values with the ESP8266 using MicroPython firmware.
To follow this tutorial you need MicroPython firmware installed in your ESP8266 board.
Script – Analog Reading ESP82266
The following script for the ESP8266 reads analog values from A0 pin.
# Complete project details at from machine import Pin, ADC from time import sleep pot = ADC(0) while True: pot_value = pot.read() print(pot_value) sleep(0.1)
How the code works
To read analog inputs, import the ADC class in addition to the Pin class from the machine module. We also import the sleep method.
from machine import Pin, ADC from time import sleep
Then, create an ADC object called pot on A0 pin.
pot = ADC(0)
In the loop, read the pot value and save it in the pot_value variable. To read the value from the pot, use the read() method on the pot object.
pot_value = pot.read()
Then, print the pot_value.
print(pot_value)
At the end, add a delay of 100 ms.
sleep(0.1)
In summary:
- To read an analog value you use the ADC class;
- To create an ADC object simply call ADC(0).
- To read the analog value, use the read() method on the ADC object.
Demonstration
After uploading the code to the ESP8266 board using Thonny IDE or uPyCraft IDE, rotate the potentiometer.
Check the shell of your MicroPython IDE to read the values from the potentiometer.
3. ESP8266 Analog Read with Lua/NodeMCU
This section shows how use the NodeMCU firmware to read analog values with the ESP8266.
Flashing ESP8266 with Lua/NodeMCU firmware
First, you have to flash your ESPs with NodeMCU firmare.
- Open the ESPlorer
Testing the ADC Pin (A0)
To send commands with the ESPlorer IDE, you need to establish a serial communication with your ESP, follow these instructions:
- Connect your ESP-12E or FTDI programmer to your computer
- Set bad raute as 9600
- Select your ESP-12E or FTDI programmer port (COM3, for example)
- Press Open/Close
Then type the following command:
print(adc.read(0))
Click the button “Send” as shown below.
It should return a value between 0 and 1024. Rotate your potentiometer and send the print(adc.read(0)) command a few more times to get the potentiometer value.
When your potentiometer is near 0V it prints 0 and when it reaches 3.3V it should print 1024.
Wrapping Up
In this tutorial we’ve shown you how to read analog values using the ESP8266 analog pin (A0). One important thing to notice is that the ESP8266 analog input range is either 0-1V if you’re using a bare chip, or 0-3.3V if you’re using a development board.
Either way, you should always be careful not to exceed the maximum recommended voltage. You may consider adding a voltage divider circuit when you need a higher input voltage range.
We hope you’ve found this tutorial useful. If you’re just getting started with the ESP8266, we recommend the following resources:
- Home Automation Using ESP8266 (eBook)
- Getting Started with the ESP8266
- ESP8266 GPIO Reference Guide
- ESP8266 Web Server
Thanks for reading.
45 thoughts on “ESP8266 ADC – Read Analog Values with Arduino IDE, MicroPython and Lua”
Really a great job, well done. For me it was very useful.
Thank you!
Rui
I have been watching/reading your tutorials for a while with interest.
I am a little confused with how you have chosen the resistors because you have a parallel combination between the 1K Pot and the 100R (R2).
Also would it not make sense to have added a small value cap of around 100nF between ground and the output from the pot. Pots are notoriously noisey during use?
Hi Rui,
I have to agree with NealB. You do have a parallel resistance that you are not taking into account in the calculation.
The voltage divider should go between the output of the pot and ground. The pot’s fixed terminals should be between the 3.3V and ground. And the 220 Ohm resistor connects to the sweep of the pot. The common connection between the fixed resistors goes to the ADC pin. And the 100 Ohm to ground.
This increases the accuracy of your calculation and lowers the current drain since there is now 1K between 3.3 and ground and not 320 Ohms. Making for less than one third of the current drain.
All assuming very low current to the ADC of course or nothing much would work anyway.
The cap is also a great idea to reduce the noise. Good work Neal.
Thanks for your suggestions. You guys are 100% right and I thought I had the circuit like you guys said, but using Fritizing I’ve messed up the connections I had in real life.
I’ve updated the schematics!
Thanks again for taking the time to leave a comment Paul and NealB.
Have a great day!
I put the whole thing and I returned values min 19 max 60, where am I wrong?
sir i’v a question over here that is how we can send this ADC value from one esp8266 module to another esp8266 module ?
Here’s how you send values from one ESP to another:
I measured the output voltage of 0.25 volts and marks me
Ho risolto, sostituendo il trimmer, e funziona.
Ho poi installato sul ESP 07 lo sketch per il TA SCT 013 e benché ruoti il trimmer non mi restituisce i valori.
Scusami.
I solved it by replacing the trimmer, and it works.
I then installed the ESP 07 sketch for the TA 013 SCT and rotate the trimmer although I do not return values
Awesome! I’m glad it’s working now. Thank you for reading!
Good one Rui! I can see this being useful reading pressure transducers, and controlling with the outcome…
Thank you!,
Bi;; Miller
Thank you so much for your continuous support. I really appreciate!
For the Arduino IDE everything is very similar, but your send that command: analogRead(A0);,
Bill Miller
Hi Bill,
Thank you so much for your continuous support. I really appreciate!
For the Arduino IDE everything is very similar, but your send that command: analogRead(A0);
hello Rui why in all project i can not see code
Your tutorials are very helpful for me, I am learning a lot.
Thanks!
Rui
Your tutorials are very helpful for me, I am learning a lot.
Is there a way to read the input voltage on NodeMCU. I would like to be able to read the voltage level of the battery supplying Vin.
Yes, adc.readvdd33(). It’s in the docs.
Hi, really nice to see your website. It seems that ESP8266 is getting more and more popular! Good work!
I only have one comment to make, I am using a nodemcu board (v0.9, yeah, the wide one) and the measurable voltage at A0 is between 0V (well, ground gives a reading of “2”, close enough) and 3V. which is “1024”. I don’t know if this is a recent change or what, but it may help someone…
Thanks! You’re right, some modules are 3V and go to 1024 reading.
nodemcu boards already have a 3.3v divider on them. So input for A0 is 0-3.3v not 0-1v
Yup, but older versions (the first module I bought and tested) was 1v
FYI, The Adafruit HUZZAH ESP8266 is 0-1v input.
Hi Steve.
Thank you for that information.
It may vary depending of the ESP8266 development board you’re using.
Thanks for sharing.
Regards,
Sara
Hello, thanks for the great tutorial. Can I use pin A0 as an output? I want to dim an LED attached to A0.
You mentioned using Multimeter to check if it is 1 volt limit or 3.3 v limit, but did not show how.
So, I risked it and build the same circuit. Wrote a simple Arduino sketch to read the analog value.
I stopped around 600 and measured the voltage It was 1.99 volt So I assumed that my nodemcu is ok for 3.3v.. BTY the version number I have is old and it is is 0.9
can u make a video in english language?
Hi Mahmdul,
Unfortunately I don’t have any companion videos for this blog post… Thanks for asking,
Rui
I am trying to connect esp8266 with dfrobot ph sensor. My code is get uploaded bt on serial monitor it doesnot show any readings. I am not understanding why this is happening? Could you just help me out with that?
Hi.
I’m sorry, but I don’t have any project using a ph sensor :/
Hi Rui ,
I have a question because i see may somthing wrong
Your formular is : Vout = Vin*R2/(R1+R2)
But in your toturol –> R1 = 220ohm, R2=100ohm, right?
So, how you get 3.3V in your connect ?
Because, my opinion –> with that connect. We only get 1.56V at ADC output
Thanks, please advice me
Hi.
You only need that voltage divider if the maximum input voltage on the analog pin of the ESP8266 is 1V (some old versions only support up to 1V).
I need to update this blog post to make it clearer.
Thanks for noticing.
Regards,
Sara 🙂
There is an annoying “feature” of the NodeMCU and the pre-ADC divisor resistor network. The 100K/220K network produces a full count at 3.2v, not 3.3v. So at 3.2v the ADC is already seeing the full 1v. This results in a 3% loss of scale and ADC count is now effectively reduced from the promised 1024 bits down to 993 bits.
You need to take this into account when using a Steinhart-Hart formula to convert the thermistor resistance into Kelvin. Otherwise the formula does not work.
Again I thought of something I needed to do and sure enough – found it here.
Thanks Rui for your great information.
You’re welcome.
Regards,
Sara
Hi, and tanks for a nice web site.
I cannot see that the voltage divider will give the right value.
Let’s assume that the 1K pot is in the middle position. Without the
voltage regulator, there will be 500 ohms from both ground and 3.3V
to the output, giving a voltage of 1.65.
But you cannot add this other voltage regulator and assume that the
voltage is still 1.65.
The path trough the voltage regulator add a resistor of 320 ohm in parallel
to the 500 ohms of the ground side of the pot, gives an effective value of
1 / ( 1/500 + 1/320) = 195 ohms.
The output of the pot will now be 0.96 V and not 1.65.
Hence, the voltage divider will not divide the right voltage.
I believe that the resistors in the voltage divider will have to be significantly higher than the resistance in the pot for this to work.
Regards,
Stein Gulbrandsen
Parabéns! Rui e Sara,
Muito bom seu Poste sobre ADC no Nodemecu será muito útil para leitura de sensores analógicos .
Congratulations!
Carlos Bruni
Salvador Bahia Brasil
Muito obrigada por essas palavras.
Cumprimentos de Portugal.
Sara
Hello,
Thank you very much to share this nice code ! I have a problem. I try to use this code with a DHT22. On the charts it Always shows the same value for temperature (24.3°C), humidity (57%).
If I print the values in the serial monitor Inside the loop, the values changed and are true but not on the charts. It seems that the function of reading temperature, humidity and pressure are not executed.
Could you help me to fix this problem. Thank you very much !
Best regards
|
https://randomnerdtutorials.com/esp8266-adc-reading-analog-values-with-nodemcu/
|
CC-MAIN-2019-47
|
refinedweb
| 2,886
| 73.78
|
By Bruno Sonnino
Developers everywhere want to develop games. And why not? Games are among the best sellers in computer history, and the fortunes involved in the game business keep attracting developers to it. As a developer, I’d certainly like to be the one who develops the next Angry Birds* or Halo*.
In practice, however, game development is one of the most difficult areas of software development. You have to remember those trigonometry, geometry, and physics classes that you thought you’d never use. Besides that, your game must combine sound, video, and a story in a way that the user will want to play it more and more. And all that is before you write a single line of code!
To make things easier, frameworks are available for developing games using not only C and C++ but even C# or JavaScript* (yes, you can develop three-dimensional games for your browser using HTML5 and JavaScript).
One of these frameworks is Microsoft XNA*, which builds on Microsoft DirectX* technology, allowing you to create games for Xbox 360*, Windows*, and Windows Phone*. Microsoft has phased out XNA, but meanwhile, the open source community has introduced a new player: MonoGame*.
What Is MonoGame?
MonoGame is an open source implementation of the XNA application programming interface (API). It implements the XNA API not only for Windows but also for Mac* OS X*, Apple iOS*, Google Android*, Linux*, and Windows Phone. That means you can develop a game for all those platforms with only a few minor changes. That’s a wonderful feature: you can create games using C# that can be ported easily to all major desktop, tablet, and smartphone platforms. It’s a great start for those who want to conquer the world with their games.
Installing MonoGame on Windows
You don’t even need Windows to develop with MonoGame. You can use MonoDevelop* (an open source cross-platform integrated development environment [IDE] for Microsoft .NET languages) or Xamarin Studio*, a cross-platform IDE developed by Xamarin. With these IDEs, you can develop using C# on Linux or Mac.
If you are a Microsoft .NET developer and you use Microsoft Visual Studio* on a daily basis, as I do, you can install MonoGame in Visual Studio and use it to create your games. At the time of this writing, the latest stable version of MonoGame is version 3.2 This version runs in Visual Studio 2012 and 2013 and allows you to create a DirectX desktop game, which you will need if you want to support touch in the game.
The installation of MonoGame comes with many new templates in Visual Studio that you can choose to create your games, as shown in Figure 1.
Figure 1. New MonoGame* templates
Now, to create your first game, click MonoGame Windows Project and then select a name. Visual Studio creates a new project with all the files and references needed. If you run this project, you’ll get something like Figure 2.
Figure 2. Game created in a MonoGame* template
Dull, isn’t it? Just a blue screen, but this is the start for any game you build. Press Esc, and the window closes.
You can start writing your game with the project you have now, but there is a catch. You won’t be able to add any assets (images, sprites, sounds, or fonts) without compiling them to a format compatible with MonoGame. For that, you need one of these options:
- Install XNA Game Studio 4.0
- Install the Windows Phone 8 software development kit (SDK)
- Use an external program like XNA content compiler
XNA Game Studio
XNA Game Studio has everything you need to create XNA games for Windows and Xbox 360. It also has a content compiler that can compile the assets to .xnb files, then compile all the files for your MonoGame project. Currently, you can install the compiler only in Visual Studio 2010. If you don’t want to install Visual Studio 2010 just for that purpose, you can install XNA Game Studio in Visual Studio 2012 (see the link in the “For More Information” section of this article).
Windows Phone 8 SDK
You can’t install XNA Game Studio directly in Visual Studio 2012, but the Windows Phone 8 SDK installs fine in Visual Studio 2012. You can use it to create a project to compile your assets.
XNA Content Compiler
If you don’t want to install an SDK to compile your assets, you can use the XNA content compiler (see the link in “For More Information”), an open source program that can compile your assets to .xnb files that can be used in MonoGame.
Create Your First Game
The previous game that was created with the MonoGame template is the starting point for all games. You will use the same process to create all your games. In Program.cs, you have the Main function. That function initializes and runs the game:
static void Main() { using (var game = new Game1()) game.Run(); }
Game1.cs is the core of the game. There, you have two methods that are called in a loop 60 times per second: Update and Draw. In Update, you recalculate data for all the elements in the game; in Draw, you draw these elements. Note that this is a tight loop. You have 1/60th of a second, or 16.7 milliseconds, to calculate and draw the data. If you take more than that, the program may skip some Draw cycles, and you will see graphical glitches in your game.
Until recently, the input for games on desktop computers was the keyboard and mouse. Unless the user had purchased extra hardware, like driving wheels or joysticks, you could not assume that there was any other input method. With the new hardware now available, like Ultrabook™ devices, Ultrabook 2 in 1s, and all-in-one PCs, your options have changed. You can use touch input and sensors, giving users a more immersive and realistic game.
For this first game, we will create a penalty shootout soccer game. The user will use touch to “kick” the ball, and the computer goalkeeper will try to catch the ball. The direction and speed of the ball will be determined by the user’s flick. The computer goalkeeper will choose a random side and velocity to catch the ball. Each goal scored results in one point. Otherwise, the goalkeeper gets the point.
Add Content to the Game
The first step in the game is to add content. Start by adding the background field and the ball. To do so, create two .png files: one for the soccer field (Figure 3) and the other for the ball (Figure 4).
Figure 3. The soccer field
Figure 4. The soccer ball
To use these files in the game, you must compile them. If you are using XNA Game Studio or the Windows Phone 8 SDK, you must create an XNA content project. That project doesn’t need to be in the same solution. You’ll use it only to compile the assets. Add the images to this project, and build it. Then, go to the project target directory and copy the resulting .xnb files to your project.
I prefer to use the XNA Content Compiler, which doesn’t require a new project and allows you to compile the assets as needed. Simply open the program, add the files to the list, select the output directory, and click Compile. The .xnb files are ready to be added to the project.
Content Files
When the .xnb files are available, add them to the Content folder of your game. You must set the build action for each file as Content and the Copy to Output Directory to Copy if Newer. If you don’t do that, you will get an error when you try to load the assets.
Create two fields in which to store the textures of the ball and the field:
private Texture2D _backgroundTexture; private Texture2D _ballTexture;
These fields"); }
Note that the names of the textures are the same as the files in the Content folder but without the extension.
Next, draw the textures in the Draw method:
protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Green); // Set the position for the background var screenWidth = Window.ClientBounds.Width; var screenHeight = Window.ClientBounds.Height; var rectangle = new Rectangle(0, 0, screenWidth, screenHeight); // Begin a sprite batch _spriteBatch.Begin(); // Draw the background _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White); // Draw the ball var initialBallPositionX = screenWidth / 2; var ínitialBallPositionY = (int)(screenHeight * 0.8); var ballDimension = (screenWidth > screenHeight) ? (int)(screenWidth * 0.02) : (int)(screenHeight * 0.035); var ballRectangle = new Rectangle(initialBallPositionX, ínitialBallPositionY, ballDimension, ballDimension); _spriteBatch.Draw(_ballTexture, ballRectangle, Color.White); // End the sprite batch _spriteBatch.End(); base.Draw(gameTime); }
This method clears the screen with a green color, and then it draws the background and the ball at the penalty mark. The first
spriteBatch Draw method draws the background resized to the size of the window—position 0,0; the second method draws the ball at the penalty mark. It is resized to be proportional to the window size. There is no movement here because the positions don’t change. The next step is to move the ball.
Move the Ball
To move the ball, you must recalculate its position for each iteration in the loop and draw it in the new position. Perform the calculation of the new position in the
Update method:
protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit(); // TODO: Add your update logic here _ballPosition -= 3; _ballRectangle.Y = _ballPosition; base.Update(gameTime); }
The ball position is updated in every loop by subtracting three pixels. If you want to make the ball move faster, you must subtract more pixels. The variables
_screenWidth,
_screenHeight,
_backgroundRectangle,
_ballRectangle, and
_ballPosition are private fields, initialized in the
ResetWindowSize method:
private void ResetWindowSize() { _screenWidth = Window.ClientBounds.Width; _screenHeight = Window.ClientBounds.Height; _backgroundRectangle = new Rectangle(0, 0, _screenWidth, _screenHeight); _initialBallPosition = new Vector2(_screenWidth / 2.0f, _screenHeight * 0.8f); var ballDimension = (_screenWidth > _screenHeight) ? (int)(_screenWidth * 0.02) : (int)(_screenHeight * 0.035); _ballPosition = (int)_initialBallPosition.Y; _ballRectangle = new Rectangle((int)_initialBallPosition.X, (int)_initialBallPosition.Y, ballDimension, ballDimension); }
This method resets all variables that depend on the window size. It is called in the
Initialize method:
protected override void Initialize() { // TODO: Add your initialization logic here ResetWindowSize(); Window.ClientSizeChanged += (s, e) => ResetWindowSize(); base.Initialize(); }
This method is called in two different places: at the beginning of the process and every time the window size changes.
Initialize handles
ClientSizeChanged, so when the window size changes, the variables that depend on the window size are reevaluated and the ball is repositioned to the penalty position.
If you run the program, you will see that the ball moves in a straight line but doesn’t stop when the field ends. You can reposition the ball when it reaches the goal with the following code:
protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit(); // TODO: Add your update logic here _ballPosition -= 3; if (_ballPosition < _goalLinePosition) _ballPosition = (int)_initialBallPosition.Y; _ballRectangle.Y = _ballPosition; base.Update(gameTime); }
The
_goalLinePosition variable is another field, initialized in the
ResetWindowSize method:
_goalLinePosition = _screenHeight * 0.05;
You must make one other change in the
Draw method: remove all the calculation code.
protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Green); var rectangle = new Rectangle(0, 0, _screenWidth, _screenHeight); // Begin a sprite batch _spriteBatch.Begin(); // Draw the background _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White); // Draw the ball _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White); // End the sprite batch _spriteBatch.End(); base.Draw(gameTime); }
The movement is perpendicular to the goal. If you want the ball to move at an angle, create a
_ballPositionX field and increment it (to move to the right) or decrement it (to move to the left). A better way is to use a
Vector2 for the ball position, like this:
protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit(); // TODO: Add your update logic here _ballPosition.X -= 0.5f; _ballPosition.Y -= 3; if (_ballPosition.Y < _goalLinePosition) _ballPosition = new Vector2(_initialBallPosition.X,_initialBallPosition.Y); _ballRectangle.X = (int)_ballPosition.X; _ballRectangle.Y = (int)_ballPosition.Y; base.Update(gameTime); }
If you run the program, it will show the ball moving at an angle (Figure 5). The next step is to make the ball move when the user flicks it.
Figure 5. Game with the ball moving
Touch and Gestures
In this game, the motion of the ball must start with a touch flick. This flick determines the direction and velocity of the ball.
In MonoGame, you can get touch input using the
TouchScreen class. You can use the raw input data or the Gestures API. The raw input data is more flexible because you can process all input data the way you want, while the Gestures API transforms this raw data into filtered gestures so that you receive input only for the gestures you want.
Although the Gestures API is easier to use, there are some cases when it can’t be used. For example, if you want to detect a special gesture, like an X shape or multifinger gestures, you will need to use the raw data.
For this game, we only need the flick, and the Gestures API supports that, so we will use it. The first thing to do is indicate which gestures you want by using the
TouchPanel class. For example, the code:
TouchPanel.EnabledGestures = GestureType.Flick | GestureType.FreeDrag;
. . . makes MonoGame detect and notify you of flicks and drags only. Then, in the
Update method, you can process the gestures as follows:
if (TouchPanel.IsGestureAvailable) { // Read the next gesture GestureSample gesture = TouchPanel.ReadGesture(); if (gesture.GestureType == GestureType.Flick) { … } }
First, determine whether any gesture is available. If so, you can call
ReadGesture to get and process it.
Initiate Movement with Touch
First, enable flick gestures in the game using the Initialize method:
protected override void Initialize() { // TODO: Add your initialization logic here ResetWindowSize(); Window.ClientSizeChanged += (s, e) => ResetWindowSize(); TouchPanel.EnabledGestures = GestureType.Flick; base.Initialize(); }
Until now, the ball has kept moving while the game was running. Use a private field,
_isBallMoving, to tell the game when the ball is moving. In the Update method, when the program detects a flick, you set
_isBallMoving to True, and the movement starts. When the ball reaches the goal line, set
_isBallMoving to False and reset the ball position:
protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit(); // TODO: Add your update logic here if (!_isBallMoving && TouchPanel.IsGestureAvailable) { // Read the next gesture GestureSample gesture = TouchPanel.ReadGesture(); if (gesture.GestureType == GestureType.Flick) { _isBallMoving = true; _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f; } } if (_isBallMoving) { _ballPosition += _ballVelocity; // reached goal line if (_ballPosition.Y < _goalLinePosition) { _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y); _isBallMoving = false; while (TouchPanel.IsGestureAvailable) TouchPanel.ReadGesture(); } _ballRectangle.X = (int) _ballPosition.X; _ballRectangle.Y = (int) _ballPosition.Y; } base.Update(gameTime); }
The ball increment is no longer constant: the program uses the
_ballVelocity field to set the ball velocity in the x and y directions.
Gesture.Delta returns the variation of movement since the last update. To calculate the velocity of the flick, multiply this vector by the
TargetElapsedTime property.
If the ball is moving, the
_ballPosition vector is incremented by the velocity (in pixels per frame) until the ball reaches the goal line. The following code:
_isBallMoving = false; while (TouchPanel.IsGestureAvailable) TouchPanel.ReadGesture();
. . . does two things: it stops the ball and removes all gestures for the input queue. If you don’t do that, the user can flick while the ball is moving, making it move again after the ball has stopped.
When you run the game, you can flick the ball, and it will move in the direction you flicked with the speed of the flick. There is one catch here, however. The code doesn’t detect where the flick occurred. You can flick anywhere on the screen (not just in the ball), and the ball will start moving. You could use
gesture.Position to detect the position of the flick, but that property always returns 0,0, so you shouldn’t use it.
The solution is to use the raw input, get the touch point, and see if it is near the ball. The following code determines whether the touch input hits the ball. If it does, the gesture sets the
_isBallHit field:
TouchCollection touches = TouchPanel.GetState();
TouchCollection touches = TouchPanel.GetState(); if (touches.Count > 0 && touches[0].State == TouchLocationState.Pressed) { var touchPoint = new Point((int)touches[0].Position.X, (int)touches[0].Position.Y); var hitRectangle = new Rectangle((int)_ballPositionX, (int)_ballPositionY, _ballTexture.Width, _ballTexture.Height); hitRectangle.Inflate(20,20); _isBallHit = hitRectangle.Contains(touchPoint); }
Then, the movement starts only if the
_isBallHit field is True:
if (TouchPanel.IsGestureAvailable && _isBallHit)
If you run the game, you will only be able to move the ball if the flick starts on it. There is still one issue here, though: if you hit the ball too slowly or in any direction where it won’t hit the goal line, the game ends because the ball will never return to the start position. You must set a timeout for the ball movement. When the timeout is reached, the game repositions the ball.
The Update method has one parameter:
gameTime. If you store the
gameTime value when the movement starts, you can know the actual time the ball is moving and reset the game after the timeout:
if (gesture.GestureType == GestureType.Flick) { _isBallMoving = true; _isBallHit = false; _startMovement = gameTime.TotalGameTime; _ballVelocity = gesture.Delta*(float) TargetElapsedTime.TotalSeconds/5.0f; } ... var timeInMovement = (gameTime.TotalGameTime - _startMovement).TotalSeconds; // reached goal line or timeout if (_ballPosition.Y <' _goalLinePosition || timeInMovement > 5.0) { _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y); _isBallMoving = false; _isBallHit = false; while (TouchPanel.IsGestureAvailable) TouchPanel.ReadGesture(); }
Add a Goalkeeper
The game is now working, but it needs an element of difficulty: you must add a goalkeeper who will keep moving after the user kicks the ball. The goalkeeper is a .png file that the XNA Content Compiler compiles (Figure 6). You must add this compiled file to the Content folder, set its build action to Content, and set Copy to Output Directory to Copy if Newer.
Figure 6. The goalkeeper
The goalkeeper"); _goalkeeperTexture = Content.Load<Texture2D>("Goalkeeper"); }
Then, you must draw it in the
Draw method:
protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Green); // Begin a sprite batch _spriteBatch.Begin(); // Draw the background _spriteBatch.Draw(_backgroundTexture, _backgroundRectangle, Color.White); // Draw the ball _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White); // Draw the goalkeeper _spriteBatch.Draw(_goalkeeperTexture, _goalkeeperRectangle, Color.White); // End the sprite batch _spriteBatch.End(); base.Draw(gameTime); }
_goalkeeperRectangle has the rectangle of the goalkeeper in the window. It is changed in the Update method:
protected override void Update(GameTime gameTime) { … _ballRectangle.X = (int) _ballPosition.X; _ballRectangle.Y = (int) _ballPosition.Y; _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY, _goalKeeperWidth, _goalKeeperHeight); base.Update(gameTime); }
The
_goalkeeperPositionY,
_goalKeeperWidth, and
_goalKeeperHeight fields are updated in the
ResetWindowSize method:
private void ResetWindowSize() { … _goalkeeperPositionY = (int) (_screenHeight*0.12); _goalKeeperWidth = (int)(_screenWidth * 0.05); _goalKeeperHeight = (int)(_screenWidth * 0.005); }
The initial goalkeeper position is in the center of the screen, at the top near the goal line:
_goalkeeperPositionX = (_screenWidth - _goalKeeperWidth)/2;
The goalkeeper will start moving when the ball does. It will keep moving from one side to the other in a harmonic motion. This sine curve describes its movement:
X = A * sin(at + δ)
where A is the movement amplitude (the goal width), t is the time of the movement, and a and δ are random coefficients (this will make the movement somewhat random so the user can’t predict the speed and side that the goalkeeper will take).
The coefficients are calculated when the user kicks the ball with a flick:
if (gesture.GestureType == GestureType.Flick) { _isBallMoving = true; _isBallHit = false; _startMovement = gameTime.TotalGameTime; _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f; var rnd = new Random(); _aCoef = rnd.NextDouble() * 0.005; _deltaCoef = rnd.NextDouble() * Math.PI / 2; }
The a coefficient is the velocity of the goalkeeper, a number between 0 and 0.005 that represents a velocity between 0 and 0.3 pixels/seconds (maximum of 0.005 pixels in 1/60th of a second). The delta coefficient is a number between 0 and pi/2. When the ball is moving, you update the goalkeeper’s position:
if (_isBallMoving) { _ballPositionX += _ballVelocity.X; _ballPositionY += _ballVelocity.Y; _goalkeeperPositionX = (int)((_screenWidth * 0.11) * Math.Sin(_aCoef * gameTime.TotalGameTime.TotalMilliseconds + _deltaCoef) + (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11); … }
The amplitude of the movement is
_screenWidth * 0.11 (the size of the goal). Add (
_screenWidth * 0.75) / 2.0 +
_screenWidth * 0.11 to the result, so that the goalkeeper moves in front of the goal. Now, it’s time to make the goalkeeper catch the ball.
Hit Testing
If you want to know whether the goalkeeper catches the ball, you have to know whether the ball rectangle intersects the goalkeeper’s rectangle. You do this in the
Update method, after you calculate the two rectangles:
_ballRectangle.X = (int)_ballPosition.X; _ballRectangle.Y = (int)_ballPosition.Y; _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY, _goalKeeperWidth, _goalKeeperHeight); if (_goalkeeperRectangle.Intersects(_ballRectangle)) { ResetGame(); }
ResetGame is just a refactoring of the code to reset the game to its initial state:
private void ResetGame() { _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y); _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2; _isBallMoving = false; _isBallHit = false; while (TouchPanel.IsGestureAvailable) TouchPanel.ReadGesture(); }
With this simple code, the game knows whether the goalkeeper caught the ball. Now, you must know whether the ball hit the goal. You do this when the ball passes the goal line.
var isTimeout = timeInMovement > 5.0; if (_ballPosition.Y < _goalLinePosition || isTimeout) { bool isGoal = !isTimeout && (_ballPosition.X > _screenWidth * 0.375) && (_ballPosition.X < _screenWidth * 0.623); ResetGame(); }
The ball must be completely in the goal, so its position must start after the first goal post (
_screenWidth * 0.375) and must end before the second goal post (
_screenWidth * 0.625 −
_screenWidth * 0.02). Now it’s time to update the game score.
Add Scorekeeping
To add scorekeeping to the game, you must add a new asset: a spritefont with the font used in the game. A spritefont is an .xml file describing the font—the font family, its size and weight, along with some other properties. In the game, you can use a spritefont like this:
<?xml version="1.0" encoding="utf-8"?> <XnaContent xmlns: <Asset Type="Graphics:FontDescription"> <FontName>Segoe UI</FontName> <Size>24</Size> <Spacing>0</Spacing> <UseKerning>false</UseKerning> <Style>Regular</Style> <CharacterRegions> <CharacterRegion> <Start> </Star> <End></End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent>
You must compile this .xml file with XNA Content Compiler and add the resulting .xnb file to the Content folder of the project; set its build action to Content and the Copy to Output Directory to Copy if Newer. The font is loaded in the
LoadContent method:
_soccerFont = Content.Load<SpriteFont>("SoccerFont");
In
ResetWindowSize, reset the position of the score:
var scoreSize = _soccerFont.MeasureString(_scoreText); _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);
To keep score, declare two variables:
_userScore and
_computerScore. The
_userScore variable is incremented when a goal occurs, and
_computerScore is incremented when the ball goes out, there is a timeout, or the goalkeeper catches the ball:
if (_ballPosition.Y < _goalLinePosition || isTimeout) { bool isGoal = !isTimeout && (_ballPosition.X > _screenWidth * 0.375) && (_ballPosition.X < _screenWidth * 0.623); if (isGoal) _userScore++; else _computerScore++; ResetGame(); } … if (_goalkeeperRectangle.Intersects(_ballRectangle)) { _computerScore++; ResetGame(); }
ResetGame re-creates the score text and sets its position:
private void ResetGame() { _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y); _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2; _isBallMoving = false; _isBallHit = false; _scoreText = string.Format("{0} x {1}", _userScore, _computerScore); var scoreSize = _soccerFont.MeasureString(_scoreText); _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0); while (TouchPanel.IsGestureAvailable) TouchPanel.ReadGesture(); }
The
_soccerFont.MeasureString measures the string using the selected font, and you will use that measurement to calculate the score position. The score will be drawn in the Draw method:
protected override void Draw(GameTime gameTime) { … // Draw the score _spriteBatch.DrawString(_soccerFont, _scoreText, new Vector2(_scorePosition, _screenHeight * 0.9f), Color.White); // End the sprite batch _spriteBatch.End(); base.Draw(gameTime); }
Turn On the Stadium Lights
As a final touch, the game turns on the stadium lights when the light level in the room is dim. The new Ultrabook and 2 in 1 devices usually have a light sensor that you can employ to determine how much light is in the room and change the way the background is drawn.
For desktop applications, you can use the Windows API Code Pack for Microsoft .NET Framework, a library from which you access features of the Windows 7 and later operating systems. However, for this game, let’s take another path: the WinRT Sensor APIs. Although written for Windows 8, these APIs are also available for desktop applications and can be used with no change. Using them, you can port your application to Windows 8 without changing a single line of code.
The Intel® Developer Zone (IDZ) has an article on how to use the WinRT APIs in a desktop application (see the “For More Information” section). Based on that information, you must select the project in the Solution Explorer, right-click it, and then click Unload Project. Then, right-click the project again, and click Edit project. In the first
PropertyGroup, add a
TargetPlatFormVersion tag:
<PropertyGroup> <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> … <FileAlignment>512</FileAlignmen> <TargetPlatformVersion>8.0</TargetPlatformVersion> </PropertyGroup>
Right-click the project again, and then click Reload Project. Visual Studio reloads the project. When you add a new reference to the project, you will be able to see the Windows tab in the Reference Manager, as shown in Figure 7.
Figure 7. The Windows* tab in Reference Manager
Add the Windows reference to the project. You will also need to add the
System.Runtime.WindowsRuntime.dll reference. If you can’t find it in the list of assemblies, you can browse to the
.Net Assemblies folder. On my machine, the path is in
C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETCore\v4.5.
Now, you can write code to detect the light sensor:
LightSensor light = LightSensor.GetDefault(); if (light != null) {
If there is a light sensor, the
GetDefault method returns a not-null variable that you can use to detect light variations. Do that by wiring the
ReadingChanged event, like this:
LightSensor light = LightSensor.GetDefault(); if (light != null) { light.ReportInterval = 0; light.ReadingChanged += (s,e) => _lightsOn = e.Reading.IlluminanceInLux < 10; }
If the reading is less than 10, the variable
_lightsOn is True, and you can use it to draw the background in a different manner. If you see the
Draw method of
spriteBatch, you will see that the third parameter is a color. Up to this point, you have only used white. This color is used to tint the bitmap. If you use white, the colors in the bitmap remain unchanged; if you use black, the bitmap will be all black. Any other color tints the bitmap. You can use the color to turn on the lights, using a green color when the lights are off and the white color when they are on. In the
Draw method, change the drawing of the background:
_spriteBatch.Draw(_backgroundTexture, rectangle, _lightsOn ? Color.White : Color.Green);
Now, when you run the program, you will see a dark green background when the lights are off and a light green background when the lights are on (Figure 8).
Figure 8. The complete game
You have now a complete game. It’s by no means finished—it still needs a lot of polish (animations when there is a goal, ball bounces when the goalkeeper catches the ball or the ball hits the posts)—but I leave that as homework for you. The final step is to port the game to Windows 8.
Port the Game to Windows 8
Porting a MonoGame game to other platforms is easy. You just need to create a new project in the solution of type MonoGame Windows Store Project, then delete the
Game1.cs file and add the four .xnb files in the
Content folder of the Windows Desktop app to the Content folder of the new project. You won’t add new copies of the files but instead add links to the original files. In the Solution Explorer, right-click the
Content folder, click Add/Existing Files, select the four .xnb files in the Desktop project, click the down arrow next to the Add button, and select Add as link. Visual Studio adds the four links.
Then, add the
Game1.cs file from the old project to the new one. Repeat the procedure you used with the .xnb files: right-click the project, click Add/Existing Files, select the Game1.cs file from the other project folder, click the down arrow next to the Add button, and then click Add as link. The last change to make is in
Program.cs, where you must change the namespace for the
Game1 class because you are using the
Game1 class from the desktop project.
That’s it—you have created a game for Windows 8!
Conclusion
Developing games is a difficult task in its own right. You will have to remember your geometry, trigonometry, and physics classes and apply all those concepts to developing the game (wouldn’t it be nice if teachers used games when they taught these subjects?).
MonoGame makes this task a bit easier. You don’t have to deal with DirectX, you can use C# to develop your games, and you have full access to the hardware. Touch, sound, and sensors are available for your games. In addition, you can develop a game and port it with minor changes to Windows 8, Windows Phone, Mac OS X, iOS, or Android. That’s a real bonus when you want to develop multiplatform games.
For More Information
- Visit the MonoDevelop web site at
- Visit the Xamarin web site for Xamarin Studio at
- Download MonoGame from
- Download XNA Game Studio at
- Learn more about installing XNA on Windows 8 with Visual Studio 2012 at
- Download the Windows Phone 8 SDK from
- Get the XNA Content Compiler from
- Get the spritefont XML schema from
- The Windows API Code Pack for Microsoft .NET Framework is available from.
- See “Using Windows 8* WinRT API from desktop applications” at. web sites.
Add a CommentTop
(For technical discussions visit our developer forums. For site or software product issues contact support.)Please sign in to add a comment. Not a member? Join today
|
https://software.intel.com/en-us/articles/developing-games-with-monogame
|
CC-MAIN-2017-09
|
refinedweb
| 5,112
| 58.58
|
The NetBeans project provides a rich set of APIs that is being developed as part of work of various groups. By distributing development of the APIs to multiple people we can increase the amount of designed, reviewed, implemented, tuned and maintained contracts but as each module is developed by a different person, there is a challenge in providing APIs of the same quality, designed by using the same patterns and documented in a common way. This document describes the practices that should be used during the API development, so NetBeans can produce APIs that will stand the test of time and API Architecture Questions ] as main entry point. The questions provide guidance to the module owner and help him investigate architecture of his own module. By answering them the owner is supposed to realize and discover various aspects that others might depend on and remove them or document them. Based on the detailed answers (especially the <api/> tag) we generate overview tables like the one shown below that are incorporated into the Javadoc.
<api/>
Under Development
api.html
Official
#lookup
package-frame.html
table with definitions
If you write a NetBeans module you may want to setup the right layout of files first. The default infrastructure (nbbuild/templates/common.xml and nbbuild/templates/projectized.xml) let you do various tweaks, but usually it is easier to just use the expected default layout (currently described in README). The documentation related files shall be organized as follows:
module_dir/src/ - directory with your sources
module_dir/src/**/package.html - description of each package
module_dir/src/**/doc-files/ - directory for special javadoc files
module_dir/arch.xml - answers to architecture questions (see bellow)
module_dir/apichanges.xml - description of the history of changes
module_dir/nbproject/project.xml - project file with dependencies and other informations
The locations of arch.xml and apichanges.xml moreover has to be specified in nbproject/project.properties as javadoc.arch=${basedir}/arch.xml and javadoc.apichanges=${basedir}/apichanges.xml.
javadoc.arch=${basedir}/arch.xml
javadoc.apichanges=${basedir}/apichanges.xml
Open your project in NetBeans and select Generate Architecture Description from a context menu in projects tab. An empty, skeleton file will be generated. You can always reinvoke this target, if your answers are old, unanswered questions will be generated to the end of the file (the file shall stay well formated). When editing the file you can use HTML tags.Some of the answers may have autogenerated default answers (currently arch-where and dep-nb) for cases where the information is already recorded anywhere else (for example in project.xml file). They may or may not be accurate. You can accept the generated answer while surrounding it with your additional comments or you can suppress it. Just include <defaultanswer generate='here' /> or <defaultanswer generate='none' /> in the answer of for your question. If you do not use the <defaultanswer/> element at all the default answer is apended to your own answer. If you generate the defaultanswer, the source code for it is put into the comments in the html file, so if you are not satisified with the defaults, you can easily copy the output modify it and <defaultanswer generate='none' />.
<defaultanswer generate='here' />
<defaultanswer generate='none' />
<defaultanswer/>
This tag is one of the most important in the architecture file. Each use of the <api> tags generates new item into a table of API interfaces. This is the main entry point to the documentation, so use the tag a lot. Not just for a description of javadoc interfaces, but for everything. Remember that an api is ://...">
Possibly some additional description to the API which may be skipped.
</api>
The interfaces in the table are grouped by the group of the API and marked in the HTML text as <a name="group-name" /> so a reference to these tables can be made by using <a href="#group-name" />..
Important part of documentation is the description of context. It is not enough to say: find this interface in lookup. The reader may not know what lookup is, so it is better to hyperlink to its definition. You can use regular <a href> tag to link to other documents, for root of your javadoc use @TOP@. So link to lookup would be @TOP@/org/openide/util/Lookup.html.
@TOP@/org/openide/util/Lookup.html
Consider making the prose section part of package.html file. Then you can use @{link classname} to address any class of your module or from modules you depend on.
package.html
The context is often split between multiple modules. To allow links between them, the root of each module javadoc can be referred to as @org-netbeans-the-module-code-base-name@. So to link to lookup from another module one can use @org-openide-util@/org/openide/util/Lookup.html (the list of all currently known module name substitutions is available at nbbuild/javadoctools/replaces.xml).
@org-openide-util@/org/openide/util/Lookup.html
Please remove as much as possible of usages of non-relative links like and replace them with @TOP@, @org-netbeans-module-name@ or @JDK@ root points. The javadoc is being scanned for allowed and disallowed links (defined in nbbuild/javadoctools/disallowed-links.xml) and violations cause the build run from IDE to fail. It is generally not recommended to refer to NetBeans website as the documentation shall be self contained, but if you find a URL that makes sence, feel free to add it to the nbbuild/javadoctools/disallowed-links.xml yourself. Btw. it seems better to use such pseudo root point than directly relative link as for example content of package.html is usually duplicated into more directories. ...
Make sure the documentation format is correct (links point to valid places, XML files has valid syntax, etc.). This can be checked by invoking Generate Javadoc from the context menu. This builds the Javadoc and (in addition to invoking ant javadoc from command line) also checks for broken links and fails if there any - so make sure all Javadoc of modules you are referring to has already been generated.
ant javadoc
All NetBeans project Javadoc sets are being daily regenerated and uploaded to the central NetBeans API List. When your module can successfully build javadoc as described in previous section, it is time to consider adding it to the API list as well.
To add it, you have to modify nbbuild/build.properties and add own module into config.fixedmodules.javadoc property. Use
config.fixedmodules.javadoc
ant -f nbbuild/build.xml check-module-configs
cvs -q diff nbbuild
to review your changes. Then verify that everything works correctly by rebuilding all Javadoc:
ant -f nbbuild/build.xml build-javadoc
and if the build succeeds and really contains your module, prepare for committing your changes into CVS (check in the new moduleconfigs.txt too). Please note that three files in nbbuild/javadoctools shall be modified by addition of references to your module root. Verify that the additions are sane (e.g. contain no local references and look like the other lines in the files) and then commit the modified nbbuild/build.properties, nbbuild/javadoctools/replaces.xml, nbbuild/javadoctools/links.xml and nbbuild/javadoctools/properties.xml.
moduleconfigs.txt
nbbuild/javadoctools
By default the basic overview page is generated based on content of your arch.xml, apichanges.xml and project.xml. To see an example, check the overview page of component palette api.
This page has following structure:
arch.xml
arch-overall
apichanges.xml
arch-usecases
arch-where
deploy-dependencies
XXX need to describe: {@link ...}, what package.html can and cannot do, @inheritDoc, etc.
@inheritDoc
In order to ensure good enough quality of produced APIs there is a service provided to module writers - they can ask for an [reviews API review].
It is required that every new API will be reviewed prior to integration into trunk. The exception from this rule is a friend API that is used only by modules within the same cluster (the module has to explicitly list its friends). In this case the review is recommended but not required.
The short answer to question why you should be interested in an architecture review is because it will be useful. Useful to you, as you discover new possible solutions to your problems or mistakes in your design, that might appear later, when integrated together with the whole system or even in later versions, when problems with maintainability and extensibility can show up. It will be useful to the whole system as it will be composed from more stable components integrated in better ways. It will be useful to whole your project as it will get better.
Nobody knows everything, but there is a lot of knowledge spread around. Architecture review is a way to get the people with pieces of knowledge together and cooperate in preventing us from repeating known mistakes and solving problems in the wrong way.
Ask for advice through architecture review. It cannot hurt and it is likely going to be useful.
It is unlikely that the review team will do some coding for you. It is also unlikely that the reviewers are going to become domain experts and help you understand your users or your requirements. This is your task and you have to prepare these materials for the reviewers, as it is very likely they will ask you about these questions in order to verify that your way of solving problems of your users is really the right one.
As a result of architecture review you can expect advices and help in identification of
More or less expect just a high level help.
Whenever you need architecture advice or clarification and because the charter of the team is mostly high level, it is reasonable to come for the initial opinion as soon as the architecture is visible so it can be reviewed. This usually means after answering the first (more general) set of the [questions.html architecture questions] which should be done before the actual start of implementation. At this point the high level advices are of some use, later it is always hard to change implementation that has been written.
Of course things are likely change during implementation, but the high level direction given during this inception stage are likely not going to be questioned then and only the newly discovered facts and differences from the original suggestions are going to be evaluated during the before-commit review.
For details about the process see Review_Steps.
If an API is supposed to stand the test of time it has to preserve the functionality that others are using, it has to be backward compatible. Some tests for compatibility are easy, some require more work, but the testing is necessary otherwise nobody can guarantee quality when the API is evolving.
[#sigtests" Signature tests] are simple starting point, [#unittests unit tests] are very good for verifying the contract between a public API and its clients. Some people claim that unit tests are poorly named since they imply that they are QA's responsibility, but the development engineer is the one that really benefits with several advantages:
Read more about possible [test-patterns.html test patterns] that we use and how they can contribute to improvements in quality of your module.
There is an automated verification task that is executed after every daily build that checks signature of classes and their fields and methods and sends reports to api-changes mailing list. Its reports contain both incompatible and compatible changes. So one gets notified not only when something is broken, but also in case of accidental API change like addition of a method by forgetting to make it private.
By default the tests check all classes in [api-design.html#category-official official] packages. E.g. org.openide.*, org.netbeans.api.* and org.netbeans.spi.* and recently also org.netbeans.jmi that are part of modules included in daily build of standard IDE and also those that are daily uploaded to Alpha Update Center. That is why in order to have these tests running on own module one has to package the API into one of the official package (or request his own package to be added into the test) and make the module part [#alpha alpha autoupdate configuration].
org.openide.*
org.netbeans.api.*
org.netbeans.spi.*
org.netbeans.jmi
Any questions related to the sigtest framework can be either sent to nbdev@ or to Rudolf Balada who maintains the daily build (including sigtest) infrastructure.
Very important [test-patterns.html verification of quality of an API] is an automated test suite. Most of NetBeans modules uses our test harness called xtest which is based on JUnit and enhances it with a few additional features (tests should inherit from NbTestCase) and configuration framework.
The simplest way how to make your module testable is to copy the test directory from a small modules that already provide some tests and modify it (e.g. html). The needed changes include classpath modifications for compilation and execution in build.xml, correcting the list of tests in cfg-unit.xml and of course the placing your own tests into unit/src directory.
test
build.xml
cfg-unit.xml
unit/src
The last step is to include the suite in daily execution of unit tests. For that it is enough to modify the xtest/instance/master-config.xml to include your module in the unit-nb test config. Verify that you have done everything correctly by running
ant -f nbbuild/build.xml unit-validation
and checking that your tests were successfully executed. Since then make sure that your module tests really run and pass, as since then other people start to use these tests to verify validity of their own commits. And you should not cause false alarms by problems in your code.
Also consider to subscribe to notification framework to get email notifications about automatic failures, if you can reach the URL.
The important part in a [apidesign.html#life life cycle] of an API (as well as any other product) is the feedback from the users. In order to get it one should let your users know that there is an API and allow they to try it. For that purpose NetBeans use its Alpha Update Center. Curious users may enable it and that way be informed about latest development achievements.
To get a module into the Alpha Update Center one needs to make sure that the module's build.xml file has netbeans, clean, and nbm targets that work in the normal way - normally this is accomplished trivially by making a projectized module and not overriding any targets from the default build harness. Then one can add entries for the new module to nbbuild/build.properties in the list config.modules.daily-alpha-nbms. Use
netbeans
clean
nbm
nbbuild/build.properties
config.modules.daily-alpha-nbms
ant -f nbbuild/build.xml check-module-configs
cvs diff ide/golden/moduleconfigs.txt
to review your changes (check in the new moduleconfigs.txt too).
One should test the NBM building process on local disk by making sure you have everything of interest checked out from CVS, opening nbbuild as a project in the IDE, and selecting Build Daily Alpha NBMs from its context menu. If something is messed up, mail gets sent to broken_builds@netbeans.org so it can be corrected. It is a good idea to notify aumasters@netbeans.org too.
nbbuild
broken_builds@netbeans.org
aumasters@netbeans.org
The content of Alpha Update Center is refreshed every day. The new version of module NBM is build from trunk and specification version in its module manifest is compared to the specification version of already uploaded module. If the new one is greater, the new version of the module replaces the old one.
This means that one can consciously and automatically upload new versions of an API from trunk to its users just by increasing the specification version in the CVS manifest file.
Usually an attempt to produce an API requires longer development time and it is useful to mark it as not being finished yet. NetBeans use a set of [api-design.html#life stability categories] for that.
The expected scenario is that a module with an API starts its development in a CVS sandbox (contrib.netbeans.org) or as a regular netbeans.org project (like xml.netbeans.org) but is not part of the regular build. Then it is offered on [#alpha Alpha or Beta AutoUpdate] (early access mode) and one can work on its finalization. As the module is not part of a stable release, it can be modified in incompatible way. As soon as one thinks that the API is fine and it satisfies quality criteria, it can be put on Stable Update Center or even find its way into standard distribution.
There is however one restriction. In order to make it easy for API users to find out what is stable API we have come with a simple description: If a class is in org.netbeans.api.* or org.netbeans.spi.* packages, and is part of a stable release, then it is stable] one and use all the infrastructure for API development that is available. Moreover this approach is not as strict and allows the API to be part of a release even if it has not reached enough stability for unlimited amount of time.
There can be situations when strictly following the official namespace restriction may cause a lot of troubles to early adopters and hurt NetBeans acceptance and competitiveness. Sometimes one needs to provide an API quickly, cannot guarantee that it is that under development] in official namespace if following is guaranteed:
The purpose of this temporary release is to encourage early adopters to test the APIs in real world and provide feedback to make them better. The publishers of the APIs in return promise to stabilize them soon and do that carefully considering the user impact. As a result NetBeans should be able to deliver important APIs as soon as possible, offer them to early adopters and increase API quality by incorporating the feedback. The adopters could be sure that the APIs will be stabilized in a given time frame and that they will not need to do major changes (like repackaging of all import statements, which was the current strategy) when the API becomes stable. In order to clearly communicate the exceptional state of the APIs, there should be a visible and non-ignorable stamp that such APIs are [api-design.html#category-devel under development]:
org.netbeans.api.projects.ant/0
org.netbeans.api.projects.ant/2
org.netbeans.api.projects.ant/1
These suggestions shall ensure that the right balance is achieved between produces and consumers of any NetBeans API. It should however be stated that nobody shall be restricted by official namespace restriction or bound by a time limit of the temporary release, because if one does not want to create stable API, one does not have to. Contributing to [api-design.html#category-official official] API set shall be a privilege as such effort requires more attention and increased amount of work that only those who really care are willing to invest.
|
http://wiki.netbeans.org/wiki/index.php?title=API_Development&amp%3Boldid=48405&printable=yes
|
CC-MAIN-2019-51
|
refinedweb
| 3,200
| 54.32
|
Repeals Carbon Tax
I think you misunderstand the concept of a mandate. It is used when a party wins an overwhelming majority in the lower house to suggest that the upper house, where the party has a minority, should capitulate on a particular issue. I.e. suggesting the checks and balances should step aside. It has nothing to do with complaining about losing in a democracy.
Personally I think a mandate almost never exists. In a representative system you might say that all elected members have a mandate to vote as they campaigned.
Half of Germany's Power Supplied By Solar, Briefly
Very informative. The density of the German population makes sense, but can you explain why Germany, a nation with a lot of manufacturing, would have less troubles than the US with balancing?
P.S. I'm hoping the answer is their lack of strong sunlight, for delicious irony.
Robert McMillen: What Everyone Gets Wrong In the Debate Over Net Neutrality
All caches have a non-zero miss change, it doesn't mean the concept is useless. In this case it could still work. If it predicts your Sunday night show 80% of the time, then for the 20% miss you'll have to stream from the original server. However if the 80% applies to all customers then network congestion has been significantly reduced and your streaming can work at HD.
Ask Slashdot: Where's the Most Unusual Place You've Written a Program From?
It's not stated but my assumption is that the admin was removing his/her sig.
Sifting Mt. Gox's Logs Reveals Suspicious Trading Patterns
How does fractional reserve work with Bitcoin? If they loan out some deposited coins, fair enough, but if the original owner draws on it they cryptographically need that exact coin, not some other random one.
Or do depositors hand over the coins to the exchange for a virtual account, destroying the signed-ownership benefit of Bitcoin and replacing it with trust in an institution?
Places Where the Silicon Valley Bubble Could Pop
It's worse than that, half the posters don't even read the post they're replying to, they just want to say their bit.
What It's Like To Be the Scientific Consultant For The Big Bang Theory
Spot on, it's basically a modern version of Friends with a few popular science references. It's not about science, it's character and relationship-based and easily digestible.
How Concrete Contributed To the Downfall of the Roman Empire
Without military force, [they] developed the art of religious coercion and control, and the Roman empire eventually became the Holy Roman empire.
Perceptions are interesting, my understanding was completely the reverse, that as the empire declined the church had to pick up the slack of looking after the population, and so developed the attributes of a state rather than being a pacifist religion.
And the money continued to flow to Rome, for centuries....
A good point, it helps explains how the empire survived for centuries after its territory had ceased expanding. I think we see the same effects today within former colonial powers like Britain and France. The trade connections don't disappear just because they no longer officially own the country.
How Concrete Contributed To the Downfall of the Roman Empire
the myth tells people what they want to hear: A good morality tale, supporting their own particular morality.
To be fair we're all subject to this. Everyone has a particular understanding of how the world, and people, work, borne of years of experience. Any story that goes against our understanding is naturally suspect, and rightly so in most cases. Imagine if you adjusted your view of the world to accommodate every Facebook story on the miracles of coconut oil or what have you.
Is Crimea In Russia? Internet Companies Have Different Answers
The people of Crimea however should get to decide where Crimea goes. The only thing Russia can do is either offer to annex or refuse to offer to annex if the Crimean people wish to be part of Russia.
How far should this be taken? Should the majority-Ukrainian sub-districts within Crimea get to decide if their sub-district is part of Ukraine or Russia? How about individual Ukrainian households within Russian-dominated sub-districts?
Managing populations is messy and there is no clear moral principle around geography, but Crimea was a Ukrainian state, under a constitution that required a nation-wide referendum. The question is, at what point are things so bad that you should violate another country's sovereignty?
Russia clearly did more than offering to annex. They actually ran the referendum after sending their military in semi-covertly, ensuring history books will forever question the legitimacy of the vote.
Elite Violinists Can't Distinguish Between a Stradivarius and a Modern Violin
It is actually about 'playing in' the wood. This is more noticeable with acoustic guitars (and violins) - they will sound radically better after even a few months of playing due to the vibrations changing the wood. I suspect it softens the wood allowing the instrument to vibrate better across its entire mass but don't know the details.
There are diminishing returns, and I do agree that thinking a 70 year old guitar is intrinsically better than a 5 year old guitar is mostly psychological.
Elite Violinists Can't Distinguish Between a Stradivarius and a Modern Violin
since even passing the sound over an HDMI link will cause the signal to be degraded
Can you explain this? I would have thought the signal would remain digital and at its original sampling rate.
Was Eich a Threat To Mozilla's $1B Google "Trust Fund"?
If you actually believe in people's rights to have private views that are not acceptable to some, then you have to accept that they will take action based on those views. That's the whole point of having views and protecting people's non-mainstream opinions right?
This would be insightful if you clarified that you meant political action. Clearly, you are not allowed to physically disrupt a legal gay wedding any more than you can lynch a black person simply because your views call for it.
But I do agree, there's no point in saying "this is a democracy, so you are free to believe A so long as you never tell any one nor act on it". Future mind-reading tech would soon reveal what that 'liberty' was really worth.
Was Eich a Threat To Mozilla's $1B Google "Trust Fund"?
Sort of, the problem is that the people don't want their leaders to represent all of society. They want them to hold the same personal beliefs they do and really believe in their agenda. So we get
"The only clear solution is X, because of A, B and C. Opposing arguments D, E and F are completely stupid and my political opponents are crazy." instead of
"I have come up with a solution I believe is the best compromise for all parties. It is a slightly better compromise than my opponent's."
You will note that in the ideal situation, personal beliefs are irrelevant. Eich was not in an ideal workplace.
OKCupid Warns Off Mozilla Firefox Users Over Gay Rights
Don't anti-discrimination hiring laws override freedom of association for businesses? A business can't refuse to hire a qualified employee, whether CEO or cleaner, based on their race, religion, sexual orientation or political views, so it makes little sense to boycott a company for following this law.
OKCupid Warns Off Mozilla Firefox Users Over Gay Rights
Any criticism could damage their reputation, their ability to get work
The argument is that people should be allowed to express opinions and criticise opposing opinions, but not discriminate in hiring practices based on those opinions (this is professionalism). You are trying to argue this is an inconsistent position because people will discriminate and therefore, by their own argument, criticism could not be allowed either. It doesn't follow, it's a confusion of 'should' and 'will'.
What you are actually trying to argue is that some opinions are so blatantly wrong or harmful that they should be banned, or at least not be able to be publicly expressed, and that non-discrimination policies should not apply to those opinions.
OKCupid Warns Off Mozilla Firefox Users Over Gay Rights
People blithely unaware they're acting much the same as those who opposed civil rights laws in the first place.
I don't think it's hypocritical be to 'intolerant of intolerance', but you're spot on here.
Ironically, the Mozilla employees calling for his resignation have actually created a discriminatory workplace. He is a single employee at the company whose political/religious views led him to make a private donation. The fact that his role is CEO is irrelevant. The employees are saying "We want our hiring policies to discriminate based on political and religious views".
Book Review: Money: The Unauthorized Biography).
Famous Paintings Help Study the Earth's Past Atmosphere.
Ask Slashdot: What Do You Consider Elegant Code?
Sort of related. If I add some meaningless drivel here then slashdot won't consider my comment to have too few characters per line. Somewhat ironic that a story about elegant code forces code to be condensed onto less lines.
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
int i;
switch (argc - 1)
{
case 0:
printf("No args\n");
break;
case 1:
printf("One arg\n");
if (strcmp(argv[1], "hello") == 0)
{
printf("Hello yourself!\n");
break;
for(i = 0; i < 5; ++i)
{
case 2:
printf("Two args\n");
}
break;
case 3:
printf("Three args\n");
break;
default:
printf("More args than I can count\n");
break;
}
}
return 0;
}
|
http://beta.slashdot.org/~TranquilVoid
|
CC-MAIN-2014-35
|
refinedweb
| 1,641
| 61.26
|
Microsoft Releases ASP.NET MVC Under the Apache License 177
mikejuk writes "Microsoft has announced that they are being even more open with their new approach to ASP.NET MVC. It is making ASP.NET MVC, Web API, and Razor open source under an Apache 2 license. The code is hosted on CodePlex using the new Git support ... You can compile and test out the latest version, but if you do have anything to contribute you have to submit it for Microsoft's approval." To get code upstream Microsoft has to approve (pretty typical), but the git branch is supposedly tracking the latest internal release candidate branch (a bit better than Google does with Android, even). Things seem to have changed quite a bit since the days of Shared Source (tm).
anyone see the flying pigs outside? (Score:5, Funny)
i just looked and saw one fly past the empire state building
Re:anyone see the flying pigs outside? (Score:5, Funny)
Re: (Score:2)
BSD?
Re: (Score:2)
No, it's been cold there for a long time.
HAHAHAHA I completely deserve to be modded down for this one.
Re: (Score:2)
Being that ASP.NET only runs under Microsoft
Visual Studio is decent, nothing more (Score:2)
Re:Visual Studio is decent, nothing more (Score:5, Insightful)
I have seen the same (and worse) with people developing on JBoss and Java. What's your point? That some developers are bad? Honestly, working day-to-day in VS2010, NetBeans and Eclipse, VS is by a good margin the better IDE. C# is what Java could have become had its development not been handed over to Yet Another Committee With a Decision Making Disorder (TM). In many ways, C# is moving closer to good stuff like Ruby and Rails (and Sinatra). Look at what the Play! Framework guys did with version 2.0. Not implement it in C# obviously, but look at their rendering engine. Highly Razor inspired.
Prior to v 6, IIS was junk. At 6 it was OK. IIS v7 is actually very good.
On the other hand, if someone ever asks me again to maintain a Web Forms (often known as ASP.NET) project, I will decline the kind invitation. If they insist I will leave the company. Web Forms is (IMnsHO) an abomination. As is JSP. Same with the horror that is JBoss Seam.
Re:Visual Studio is decent, nothing more (Score:5, Insightful).
Re:Visual Studio is decent, nothing more (Score:4, Interesting)
Re: (Score:3)
I'd have to disagree on your point about JEE. The problem with JEE is only that, like C++, it's powerful and flexible enough to be easily abused. Which isn't surprising, since it's an integration of core concepts and technologies from such a wide variety of transaction processing tools and environments first, and a web service provider second.
i.e. JEE was designed to replace software stacks like Encina and Tuxedo, including the integration of various messaging protocols. Although it supports web devel
Re: (Score:2)
Most of what you listed are web frameworks.
If you compared raw C# to PHP code, you *might* have a good debatable argument, but if you want to compare apples to apples you need to compare the following.
ASP.NET MVC *vs* Symfony2
ASP.NET MVC *vs* CakePHP
ASP.NET MVC *vs* [INSERT PHP FRAMEWORK]
If you actually questioned writing a CGI app in C# vs PHP then you *might*
Re: (Score:2).
If you do web/enterprise development in Java (or deploying a war/ear), you are using JEE. Now if you are referring to J2EE as EJB 2.x (which are not the same terms), then you have a good point (EJB pre 3.x is an abomination.)
JSP is not bad either (it's actually decent) IIF you keep logic outside of the view (but that is true of any MVC framework.) Similarly if that law is broken, yeah, JSP is crap (just like anything else.) The problems that have plagues JSP usage is the same that plagues any other framew
Re: (Score:2)
Ever have to chase down an issue running PHP with IIS?
Yes, with v7. It was really easy - the hardest part was deciphering PHP's shitty error messages.
Re: (Score:2)
Ever have to chase down an issue running PHP with IIS? It used to be a snap with 5. 6 made it more difficult. 7 made it impossible, if you were able to get the non-MS platform to work with it at all.
Funny you should say that. It has never been easier to get PHP running than it is on IIS 7. Two clicks in the Web Platform Installer and you have a working PHP installation. Three more clicks in IIS Manager and you have a working, and pretty well-configured, PHP installation. Need to run two versions of PHP for different sites on the same server? Guess what? It's just a few more clicks. Enable and Disable PHP extensions? One click. Since we updated to IIS 7.5 (Server 2008 R2) from IIS 5 (Server 2K),
Re: (Score:2)
VS itself is also growing "more free" as time goes by. On one hand, there's web edition of VS Express, which is slowly growing in features that were previously only available in paid editions - in v11 it's (finally) going to get unit testing [hanselman.com], for example. And then there's also the free WebMatrix, which basically tries to steal the "no-framework PHP" cake.
Re:anyone see the flying pigs outside? (Score:5, Insightful)
Let me guess, you do all your code in vi?
Or perhaps you write code by shaking a magnet over your hard drive in just the right way?
Visual Studio is a good IDE regardless of your experience level. It is comparable to eclipse. Each has areas where it is a bit better than the other, but few major deficiencies.
Re: (Score:2)
Re: (Score:2)
I don't use vi, I use butterflies! [xkcd.com]
Re: (Score:2)
Re: (Score:2)
It depends on the application. If I just want to whip up a simple app with a decent interface, it's hard to beat Visual Studio. However, for more complex projects, I'm just as likely to use Notepad++ as VS depending on what I need. VS is a tool like any other - it's the best tool for some jobs, a decent tool for other jobs and the completely wrong tool for many jobs..
^^^ This. I used to be a Emacs (and later Vim) hard code user, doing all of my coding (for-a-living-coding), first in C++ then in Java. That was the shape of things until 2005-2006'ish when I ran into Eclipse. I've never looked back. Be it the JEE version or the CDT version for C/C++ development (which is what I currently use at work), I wouldn't go back to my Vim ways. Don't get me wrong, I actively use Vim next to Eclipse CDT, but there is text editing/single file coding, and there is project development
Re: (Score:2)
ctags just doesn't compare with a modern IDE and a modern metadata-laden language. Vim just simply can't do that stuff because Vim wasn't designed to do it. Vim is a text editor, and a damn good one at that, but it is just a text editor. Right tool for the job and all that. I'm sure we'll both get downvotes from the old-timers...
Re: (Score:2)
Re: (Score:2)
and for people with enough brains to not have to try to enlarge their balls by using 1980s technologies to develop software. Like VI(M) for example. A great editor for editing text files, but not a tool for developing software.
Using modern technology for real, in-the-trenches work does not amount to good street geek-creed among the junior/senior year l33t hax0rs and the "That's 70's Show" crowd.
Re: (Score:2)
I have to admit Visual Studios is a Decent IDE.
Yes, it is...for beginners.
wow, that means i've been a beginner for 30 years now. and using Visual C++/Studio for 20.of those.
I can't wait until I get good enough at this programming malarkey to use a real IDE like vi/make.
Re:anyone see the flying pigs outside? (Score:4, Insightful)
I have to admit Visual Studios is a Decent IDE.
Yes, it is...for beginners.
Someone seems to have a fond for tooting his own game console l33t hax0r horn. That is one of the most meaningless, most juvenile posts I've seen in a while. What the hell does that mean anyways?
Re: (Score:2)
Might want to lay off the drugs a little. Nothing has changed here. You still have the same issues you've always had with apache v2 which is that it's basically the same as the BSD license with the same problems.
Re:anyone see the flying *chairs* outside? (Score:2)
i just looked and saw one fly past the *space needle*
TFTFY
Re: (Score:2)
Must be Pink Floyd on tour again.
Re: (Score:2)
If they don't, just fork it.
MVC is already about 5x faster than the old
Re: (Score:2)
wtf, it cut off half my coment. MVC is already faster than the old ASP.NET WebForms/Viewstate model they used to use.
Re: (Score:2)
Now if they'd do the same thing with MFC and AT... (Score:3)
...my job would be easier. I have the source code. I hit the bugs. Sometimes it's even obvious how to fix them...
Re: (Score:2)
Sometimes it is much easier to code a work around, report the bug and continue on.
Re: (Score:2)
MFC isn't a program, it's an MVC framework library combined with a C++ wrapper around most of Win32, which itself is mostly organized as OO, even though it has a C API. And when things don't behave the way you expect, you're tracking it down anyhow. Once you've worked with MFC (or any library) for five years, you're going to know parts of it at almost as well as your own code--and, given that the framework represents a hotpath for you across multiple projects, you'll know parts of it better than your own co
Re: (Score:2)
MFC isn't a program, it's an MVC framework library combined with a C++ wrapper around most of Win32, which itself is mostly organized as OO, even though it has a C API.
Do you need more coffee this morning/[time of day where you are]? MFC [wikipedia.org] is not an MVC [wikipedia.org] framework. It is (as you say) a sometimes precariously-thin OO wrapper around the native C-based Windows API..
Re: (Score:2)
My apologies; I misspoke. MFC implements a document/view architecture, not a full MVC. WP article is still critically lacking on that point.
Re:Now if they'd do the same thing with MFC and AT (Score:5, Informative).
Microsoft has made it entirely possible for many people who work with it to move on to different frameworks, but has responded to developer pressure to keep MFC alive and maintained. I doubt it's one of their priorities, but it's better than where things sat with the release of VS2008. VS2010 has improved MFC, and it sounds like VS2011 is marginally better, with its first-class support of C++.
And while I'd love to ditch having my code support anything older than Vista, that's just not going to happen any time soon. My code isn't written for the mass market, it's written for specced use cases, which includes things like supporting WinXP and even (at times) Win2K. If you're writing a new application every year, or doing a major refactor of your code every couple years, you can keep with the times and depend on bleeding edge libraries.
If you're working with a large legacy codebase with install sites over a decade old, you're not going to be jumping at Metro quite yet. It probably isn't going to be until Windows 9 before Microsoft stabilizes their new platform enough to be worth porting code forward. Look at 95 vs 98 vs ME, and then XP vs XPSP2 (which really could have been a new operating system...), and then Vista vs Win7. Microsoft tick-tocks between "what fresh hell is this?" and "Whew! That's a relief!".
Re: (Score:2)
Don't assume you know my problem domain, because you quite clearly don't. You obviously don't even know the kind of user my UI faces, or that VS2012 isn't even in public beta yet.
Your lack of knowledge shown in your judgement is beyond laughable; it's pitiable. Hell, it shows you didn't even bother to read the comment you replied to. You didn't pick up on that I noted I'd love to set a minimum of Vista, and you didn't pick up that my requirements are driven by my customers.
Either that, or you're just anothe
Re: (Score:2)
VS2012 isn't even in public beta yet.
Yes it is. [microsoft.com] It's even supported for production code. They just don't call it 2012 yet since the RTM date hasn't been set.
Re: (Score:2)
I'm well aware of the product currently called "Visual Studio 11 Beta". And is even called such in all the per-version MSDN API docs. If they rename it when they RTM, so be it. If not, then not.
Re: (Score:2)
I suppose you didn't bother checking out SQL Server Denali until it got renamed SQL Server 2012 in the RC either? Actually, that would be too early as well. You probably want to be certified first before using it. hehe.
Re: (Score:2)
So you knew it was released and then made an intentionally misleading statement because of splitting hairs over the name. What a dick move.
No. It's not called Visual Studio 2012. It's called Visual Studio 2011 Beta. The person I was replying to used the term "VS2012", and that's the term I responded to. No misleading statement was intended. If he meant the VS2011 beta, that's the product he should have referenced. We're (presumably) computer programmers talking about computer programming, for crying out loud; accuracy and precision count.
Re: (Score:2)
My god you are a moron. It's 11, as in the current version number not the year it will be released.
Doh. So it is. I've simply been misreading the various blog posts about it for quite some time.
Notice how it is 2012 and the beta was just released.
I'm a C++ developer. Remember C++0x?
It will almost certainly be called 2012 when it is out but it isn't called that yet because the RTM date hasn't been announced. Visual Studio 2011 simply does not exist in any way and will never exist.
Probably correct; you're right about the version number. I grew accustomed to VS2010's being VS10.
Re: (Score:2)
Your customers directors, managers, CIOs, and others will be using
...... Iphones, and Andriod devices to view their web apps and to get work done
fixed that for you.
Re: (Score:3)
Re Now if they do the same thing with MFC and ATL (Score:2)
Waiting for approval is a valid point, but anyone who spent time using MFC probably has their own list of things that drive them bonkers, and most likely know where the fix needs to be. Screw approval, fix it in your code and ship the result linked statically.
No joke, even the C/C++ headers in MSVC 6 are broken, and due to licensing issues Microsoft can't release a patch for it. People just fix it locally and it's done. Of course, this is mostly STL, so it's not in the runtime DLL files so you could stil
Wow (Score:2)
I am pretty impressed. I honestly wonder how this will effect the web development industry moving forward.
Re: (Score:2)
If you can replace effect with implement, accomplish, you're kosher (effect is also a verb). For instance, the GP may be saying, "I honestly wonder how this will actualize the web development industry moving forward." This statement is fully buzzword-worthy for use in water-cooler dialog, and therefore not out of the realm of possibilities of intended meanings.
Grammar Nazi: 0
Useless Pedant: 1
Re: (Score:2)
There are lots of people already doing it, including some mega-corporates such as Netflix.
Two Groups (Score:2)
It seems to me there are two groups inside Microsoft -- Developers and Managers. Developers want to do things like this. Managers want to prevent things like this. Looks like the devs won this one.
Re: (Score:2)
no, it was ray ozzie or some other guy. can't remember the name. MS hired him and he gave them a business reason to use and support open source standards compliant software.
Re:Two Groups (Score:4, Interesting)
It's not just developers and managers as groups. Remember, that these days Microsoft is a huge organisation and is full of many different divisions. There's Windows, Office, XBox, Windows Phone etc. amongst many others.
The guys that are responsible for this move are the "Web Dev Div", who are a sub-group within the "Developer Division".
It contains many people, including guys like Scott Guthrie [wikipedia.org], Scott Hanselman [hanselman.com], Phil Haack [haacked.com] (who recently left to join GitHub) etc., who have always done things that don't seem very Microsoft-like, like releasing ASP.NET MVC as an open-source product [asp.net] - albeit one that didn't accept outside contributions - back in 2009 along with such moves as bundling things like the open source jQuery library with Visual Studio and openly committing improvements [stephenwalther.com] back to the core project without trying the usual embrace, extend, extinguish tactics. [jquery.com]
Within certain parts of Microsoft, they can, have done, and are continuing to do some very interesting, worthwhile and generally community-friendly (and not-so-evil) work.
Re: (Score:2)
I didn't mean it as formal divisions, I meant there are two types of people working there -- those with a developer mindset (mostly like the devs you mentioned) and
....others. It seems to be mostly the developers (in whatever division they work in) that want to do the cool things.
Re: (Score:2)
two groups inside Microsoft -- Developers and Managers
you forgot the accountants and the lawyers. those are the key players in this regard.
ASP.NET MVC is OK, but C# is awesome (Score:4, Insightful)
Re: (Score:2)
Absolutely. I'd love to see Microsoft either provide a high-quality cross-platform
.NET implementation, or at least release the core CLR stuff. C# has a lot of really interesting stuff going on in it.
Sad part is the community has been wrong about C# (Score:2)
Prior to Sun being bought by Oracle, you could be forgiven for thinking that Java was the safer patent bet. However, now we are about a decade into the conversation and the "safer" platform is the one where there is a major patent battle while Microsoft has never once even bared its fangs at Mono. I think the difference comes down to this...
For Microsoft, C# is just a gateway drug to making Windows apps. Microsoft honestly doesn't give a rat's ass if you are building products with C# or any other aspect of
Re: (Score:2)
Python is my favourite language, and C# is my favourite high-level language. Compared to Java there's much less boilerplate required, and there are plenty of features which make it IMO pleasurable to program in. While it occupies a comparable niche to Java, the difference is light and day in terms of developer joy.
Re: (Score:2)
Or better yet not have to deal with any of that shit and use a language that does not require it.
Re: (Score:2)
Re: (Score:2)
Honest question: Which of those C# features does Visual Basic not have?
The latest VB should have most, if not all the features as the latest C#. That said, the syntax of VB, along with the lack of short circuit conditionals, makes my eye twitch.
Re: (Score:2)
It never ceases to annoy me that VB has a way of re-attempting a code path that caused an exception after resolving the exception cause but C# does not short of goto. In a catch block in VB, you can simply call "ReTry" and it will re-attempt from the line that failed.
So there's even areas where C# is beaten by VB.
Re: (Score:2)
There always seem to be times where VB has features over C# or vice-versa. Many of those features are library-based, so you can just include the VB library in a C# app and gain access to functions such as Str() and the My. namespace. Fact is, most of those things are still accessible to either language, there are just helper classes and methods which make things easier for their respective lagnuages.
I wasn't aware of the language feature you described though, which certainly can't be just "referenced" from
Re: (Score:2)
When I am coding C#, if I notice a build up of boilerplate, it is usually a sign that I am not taking adva
Re: (Score:2)
Partial classes - bloody genius idea. IDE-generated boilerplate in one file you don't touch, your code in another.
Re: (Score:2)
Re: (Score:2)
Yes, after coming from Java there were quite a few "ooh, shiny" moments that made me happy. I keep meaning to write something just to play with LINQ.
Re: (Score:2)
I don't get this argument every time it's brought up. Microsoft has a notice on their website promising that they will not peruse patent cases against people for using custom C#/.NET implementations. Not only that, Microsoft, with the release of
.NET, released a cross platform open source implementation of CLR, free for use and study. Sure, people will often counter this by saying that Microsoft can't be trusted over some silly "promise". To which I say, really? I could understand that if it was some unoffi
but this makes sense. (Score:3)
IANAP, but if:
.NET tools...
Windows 8 is focusing on HTML5 and JavaScript.
Microsoft still wants to sell
then open sourcing. NET makes sense. give away the handle, sell the blades.
Re: (Score:2)
Windows 8 isnt focusing on HTML5 and JS - its just adding them as a development pathway. Don't believe all of the outrage stories, they invariably aren't true...
Re: (Score:3)
Windows 8 is focusing on HTML5 and JavaScript.
Win8 Metro apps can be written in any of: C++, C#/VB, JS (out of the box, third parties can add support to their own languages as well). Of those, I personally find C# to be the most convenient, simply because most Metro APIs are async only (to force developers to never block the UI thread with some expensive call), and C# has nice syntactic sugar for this in form of async/await [microsoft.com], whereas in both C++ and JS you have to manually chain callbacks with x.then(y).
Re: (Score:2)
Except, they're not open sourcing
.NET, just MVC4. .NET will never be open sourced, because it ties too much into the OS.
Is Microsoft still evil? (Score:5, Interesting)
All evidence points to Microsoft no longer being "evil". At worst, maybe jerks, but not evil:
Internet Explorer is following standards about as well as everyone else
Windows is no longer a horrible, bug-ridden mess - the main complaints are "it's too similar to the last one, no need to upgrade" and "they're changing the interface too much AND I DON'T LIKE IT"
The 360 is fairly open, by console standards, even with "official" homebrew via XNA (you need to buy a license, but it's not a $100,000 developer's license)
They've been submitting a lot of code to open-source, using *actual* open-source licenses
Their stuff works well wit virtualization under Linux, and their VM will run Linux (face it, the Old MS would have made it near-impossible to run Windows within Linux)
Now, they're still far from my favorite company, but I for one am willing to reclassify them from "lawful evil" to "lawful neutral".
Re: (Score:2)
Those sound about right. I just wish the respective managements of (e.g.) Google, Slashdot, and Canonical didn't almost-proportionally regress as Microsoft slowly morphed from Hellspawn to New And Somewhat Improved Hellspawn.
I mean, between Google+, Slashdot TV, Unity (I tried that, and it made me move to Arch Linux with KDE, with a layover in Kubuntu 11.10)...
Re: (Score:2)
Google, Slashdot, and Canonical didn't almost-proportionally regress
While I think Slashdot does need to get criticism from time to time, I don't think the editor issues are on the same level as Unity & the cult of Jobs.
Re: (Score:2, Interesting)
Re: (Score:2)
NDAs are everywhere in business. Everywhere. You can't call a company "evil" for using them without diluting the word "evil" itself.
And, while MS does have a huge patent portfolio which is a significant potential threat, I don't actually know of them *using* it the way you describe. They sue other companies, sure, but I have not yet heard of them suing an open-source project for patent violations.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
The problem is, did they do these things because they're trying to fix their reputation, or because they realize they're screwed if they don't?
I'm inclined to believe the latter, because based on other behaviour (such as the mutilation of the standards process with their open office file format) indicates that they are still doing whatever they can to screw everyone around them and maintain control.
For example with IE, they *had* to make a standards compliant browser because their gamble to control the inte
The target is different (Score:2, Troll)
Microsoft is aggressive towards their competitors, as was IBM in its day. Both had antitrust problems. Google and Facebook are aggressive towards their users. They have privacy-invasion problems.
This is the price of ad-supported "free". Microsoft wants you to buy their stuff. You're the customer. With Google and Facebook, you're the product.
Re: (Score:2)
But, in Microsoft's case, all I see is case-by-case desperation that lacks the overall cultural change that IBM and Sun went through. Their browser was los
Re: (Score:3)
Sun's dramatic turnaround? You mean the turnaround from being a thriving business to one that no longer exists? Somehow I don't think Microsoft is looking to make such a dramatic turnaround.
IBM is not some open source 'angel' either. Sure, they have made some great contributions to open source projects. However, they wisely view open source as just another tool in their toolbox. They support open source as long as it benefits them. They do not belong to some open source religion. Sure, they support. Regarding indie development, they treat that like a disease these days and hope it goes away. But in the early days that was a bonus for the PC gamers that they.
Look at who else there is: Nintendo, whose development kit costs more than my car, and Sony, who's fucking Sony. Compared to that, a $100 XNA license is as open as it gets, especially since it doubles as a WP7 license, where it's on par with Apple (nobody really *wants* to write for WP7, but it's a decent gesture.
Oh, and Nintendo originally charged for the web browser.
Regarding indie development, they treat that like a disease these days and hope it goes away. But in the early days that was a bonus for the PC gamers that they converted to xbox. But now they brag about how much time is spent *not* playing games on the 360 as they try to out "causal" Nintendo.
The XBox actually has the best indie support of any console. They're still terrible compared to Steam, but you judge evil based on its compet
Re: (Score:2)
Re: (Score:2)
First, it's not like Microsoft tries to hide that Starter is horribly crippled. You bought the Shit version, and now you're complaining that it's shit?
Second, how did you even buy it? From what I can tell, it's only sold to OEMs, primarily in "emerging markets" (I've never even *seen* a Starter copy in the US - every computer I've checked out had Home Premium or Ultimate). So you bought from a cheap-ass vendor, and then complained that it came with the crappy version of Windows, instead of seeing it as a di
re Approval Required jibe (Score:4, Insightful)
So, tell me, which flag ship open source projects main branch can you just merge your code into without approval? The Linux kernel? Apache? X? MySQL? Firefox?
Thats a fucking pathetic jibe "Unknown Lamer", not something an editor should be making.
Re: (Score:2)
Actually, rereading it you can take that in one of two ways, either "thats typical for projects of this type" or "thats typical of Microsoft *rolleyes*" - I took it in one way, which probably means others will as well. Apologies if it was meant in the other way.
Re: (Score:3)
To get code upstream Microsoft has to approve (pretty typical)
So, tell me, which flag ship open source projects main branch can you just merge your code into without approval? The Linux kernel? Apache? X? MySQL? Firefox?Thats a fucking pathetic jibe "Unknown Lamer", not something an editor should be making.
I read (pretty typical) as (this is standard practice for most big projects like this). It took your mini rant for me to consider that it could be derogatory.
I'm all against editorials in my summaries, but I think your freaking out about the wrong thing here.
(Unless of course it was meant in the way you have interpreted - in which case, yes, by all means fuck that guy right in the ear!)
New Approach (Score:5, Interesting)
Microsoft now seem to have a really good grasp on how to deal with free software. They know they need to get developers and administrators to incorporate or use their products in part, rather than use the defacto standard free software, and that means they need to be interoperable and compatible.
A conference I attended for CakePHP in Manchester 2011 was sponsored by Microsoft, they provided a 3 course meal and contributed towards the bar tab for attendees.
They know the way to a geeks heart - food and beer - and they also know that they need to get free software communities to build support for Microsoft platforms as well as the free platforms. For example the CakePHP community, Microsoft went to great efforts to ensure that the MSSQL database abstraction class was improved by the core developers to better support the MS platform. Now I can at least choose between MySQL and MSSQL, and there's a chance I'd buy and license it for a particular application.
This attitude from Microsoft isn't new, but I don't really see them being able to execute the "extinguish" part of their normal plan on GPL/BSD/MIT licensed software. Instead I can see them at grassroots level trying to make their platform relevant and make sure people can hook into it, but they get left on the sidelines.
holy TLA! (Score:2)
Playing TLA Bingo in our developer meetings will get too easy if this continues
Re: (Score:2)
You are aware that Apple is running and sponsoring some of the most popular open-source projects on the web, right? Also, that Steve Jobs is dead?
Re: (Score:2, Insightful)
And Microsoft dumps billions into collaborative research. Many modern system designs from CPU to memory to IO to Networking were spear-headed by MS research. I can enjoy the stereotyping of MS as a soulless company that ships insecure products while adding nothing of value.
Some times we like to stereotype for fun. This is why Taantric said '[...]honour Steve Jobs with the "Borg"
/. thumbnail'
Anyway, you can't deny that Apple got to 100bil without price gouging(aka ripping off) its customers. They may have a
Re: (Score:2).
Re: Time for a change (Score:2)
nobody was tricked into buying an iPod or iPhone
But on the other hand, the iPad... [abc.net.au]
Re: (Score:2)
Re: (Score:3)
yes, this is a "the entire internet runs on Linux", so yes, this is an arena where they compete and come off worse. No wonder they are desperately trying to extend their monopoly onto the web server marketplace just like the desktop.
|
http://developers.slashdot.org/story/12/03/28/142228/microsoft-releases-aspnet-mvc-under-the-apache-license?utm_source=slashdot&utm_medium=twitter
|
CC-MAIN-2015-35
|
refinedweb
| 5,608
| 72.46
|
This section describes the following tasks:
To Secure the Example Service Application (STS)
To Create and Secure the STS (STS)
To Secure the Example Web Service Client Application (STS)
Another STS example application can be found at the following URL:.
The).
To create and secure a Security Token Service for this example, follow these steps.
Create a new project for the STS by selecting File->New Project.
Select Web, then Web Application, then Next.
Type MySTSProject for the Project Name, then click Finish.
Right-click the MySTSProject node, select New, then click File/Folder at the top.
Select Web Services from the Categories list.
Select Secure Token Service (STS) from the File Type(s) list, then click Next.
Type the name MySTS for the Web Service Class Name.
Select org.me.my.sts from the Package list, then click Finish.
The IDE takes a while to create the STS. When created, it appears under the project’s Web Services node as MySTSService, and MySTS.java appears in the right pane.
The STS wizard creates an empty implementation of the provider class. Implement the provider implementation class by copying the following code into the MySTS.java file.
Add these import statements to the list of imports:
import com.sun.xml.ws.security.trust.sts.BaseSTSImpl; import javax.annotation.Resource;;
Add the following Resource annotation after the line public class MySTS implements javax.xml.ws.Provider<Source> {:
@Resource protected WebServiceContext context;
Change the following line of code:
public class MySTS implements javax.xml.ws.Provider<Source>
to:
public class MySTS extends BaseSTSImpl implements javax.xml.ws.Provider<Source>
For the invoke method, replace the return null line with the following return statement:
return super.invoke(source);
Add the following method after the invoke method:
protected MessageContext getMessageContext() { MessageContext msgCtx = context.getMessageContext(); return msgCtx; }
Back in the Projects window, expand the MySTSProject node, then expand the Web Services node. Right-click the MySTSService[IMySTSService_Port] node and select Edit Web Service Attributes to configure the STS.
Select Secure Service if it’s not already selected.
Verify that the Security Mechanism of Username Authentication with Symmetric Keys is selected.
Select the Configure button. For Algorithm Suite, verify that Basic128 bit is selected (so that it matches the value selected for the service.) For the Key Size, verify that 128 is selected. Select OK to close the configuration dialog.
Select Act as Secure Token Service (STS), then click OK to close the Select STS Service Provider dialog.
Click the Keystore button to provide your keystore with the alias identifying the service certificate and private key. To do this, click the Load Aliases button, select wssip, then click OK to close the dialog.
Click OK to close the WSIT Configuration dialog.
A new file is added to the project. To view the WSIT configuration file, expand Web Pages->WEB-INF->wsdl->MySTS, then double-click the file MySTSService.wsdl. This file contains the sc:KeyStore element.
Right-click the MySTSProject tab, select Properties. Select the Run category, and type the following in the Relative URL field: /MySTSService?wsdl.
Run the Project (right-click the project and select Run Project).
The STS WSDL appears in the browser.
Follow the steps to secure the client application as described in To Secure the Example Web Service Client Application (STS).
This section demonstrates adding security to the CalculatorApplication’s web service client, which was secured using the security mechanism described in STS Issued Token.
To add security to the client,.
Drill down from CalculatorWSServletClient->Web Service References.
Right-click CalculatorWSService and select Edit Web Service Attributes, then select the WSIT Configuration tab.
Provide the client’s private key by pointing to an alias in the keystore. To do this, expand the Certificates node, click the Load Aliases button for the keystore, and select xws-security-client from the Alias list.
Provide the service’s certificate by pointing to an alias in the client truststore. To do this, from the Certificates node, click the Load Aliases button for the truststore and select xws-security-server from the Alias list.
Expand the Security Token Service node to provide details for the STS to be used. When the Endpoint and the Metadata values are the same, you only need to enter the Endpoint value. For the Endpoint field, type the following value:
Click OK to close this dialog.
The service requires a token to be issued from the STS at, with WSDL file. To do this, follow these steps:
Right-click the CalculatorWSServletClient node and select New->Web Service Client.
The New Web Service Client window appears.
Select the WSDL URL option.
Cut and paste the URL of the web service that you want the client to consume into the WSDL URL field. For example, here is the URL for the MySTS web service:
Type org.me.calculator.client.sts in the Package field, then click Finish.
The Projects window displays the new web service client.
Drill down from CalculatorWSServletClient->Web Service References.
Right-click MySTSService and select Edit Web Service Attributes.
Select the WSIT Configuration tab of the MySTSService dialog.
Provide the client’s private key by pointing to an alias in the keystore. To do this, expand the Certificates node, click the Load Aliases button for the keystore, and select xws-security-client from the Alias list.
Verify the STS’s certificate by pointing to an alias in the client truststore. To do this, from the Certificates node, click the Load Aliases button and select wssip from the Alias list.
Expand the Username Authentication node and verify that the default user name and password as specified in GlassFish.
If you followed the steps in Adding Users to GlassFish, this will be User Name wsitUser and Password changeit.
Click OK to close this dialog.
Compile and run this application by right-clicking the CalculatorWSServletClient project and selecting Run Project.
|
http://docs.oracle.com/cd/E19159-01/820-1072/6ncp48v61/index.html
|
CC-MAIN-2014-23
|
refinedweb
| 976
| 58.69
|
Be the first to know about new publications.Follow publisher Unfollow publisher CS-GroupC BSC
- Info
Spread the word.
Share this publication.
- Stack
Organize your favorites into stacks.
- Like
Like this publication.
Head First Java 2nd edition_split_2
ObjectOutput Streom (a choin stream) FileOutputStream (a connection stream) you are he re ~ 433 be at the mercy of the API designers. hoping they'd thought of everythitlgyou might ever want to do. But with chaining, you can patch together your own custom chains. 011010010110111001 serialization and file I/O object is flattened (serialized) object is written as bytes to is chained to is written to File
serialization and file I/O Pata lMoves itt streatMs frotM ot1e place to at1other. CO"t1ectio" strealMs represe)tt a eOt1t1ectlot1 toa source or destlt1aflot1 (file, socket etc.) while chal" streatMs cat1't oot1"ecf Ot1 their OW" 3"d tMust be chal"ed toa cot1t1ectfo" dna",. - The java I/O API has amnection streams, that represent connections to destinations and sources such as files or network sockets, and chain streams that work only if chained to other streams. Often, it takes at least two streams hooked together to do something useful-one to represent the connection and another to call methods on. \\Thy two? Because connection streams are usually too low-level. FileOutputStream (a connection stream), for example, has methods for writing byles. But we don 't want to write byusl We want to write objects, so we need a higher-level chain stream. OK, then why not have just a single stream that does exactly what you want? One that lets you write objects but underneath converts them to bytes? Think good 00. Each class does one thing well. File OurpurStreams write bytes to a file. ObjectOurputStreams turn objects into data that can be written to a stream. So we make a FileOutputStream that lets us write to a file, and we hook an ObjectOutputStrearn (a chain stream) on the end of it. When we call writeObject() on the ObjectOutputStream. the object gets pumped into the stream and then moves to the FileOutputStream where it ultimately gets written as bytes to a file. The ability to mix and match different combinations of connection and chain streams gives you tremendous flexibilityl If you were forced to use only a singlestream class, you'd be at the mercy of the API designers. hoping they'd thought of everythitlgyou might ever want to do. But with chaining, you can patch together your own custom chains. 01101001 object is flattened (serialized) is written to is chained to object is written as bytes to 011010010110111001 ObjectOutput Streom FileOutputStream (a choin stream) (a connection stream) 01101110 01 File you are he re ~ 433 serialized objects What really happens to a., object whet' itl serialized? o Object on the heap Objects on the. heap have state-the value of the object's instance variables. These values make one instance of a class different from another instance of the same class. G Object serialized Serialized objects save the values of the instance variables. so that an identical instance (object) can be brought back to life on the heap. 00100101 01000110 Foo myFoo = new Foo(); myFoo .setwidth(37); myFoo.setBeight(70); 434 chapter 14 serlallzatlon and file I/O - 'ut what exactly IS att object~ state? What needs to be saved? Now it starts to get interesting. Easy enough to save the primitive values 37 and 70. But what if an obj ect has an instance variable that's an object reference? What about an object that has five instance variables that are object references? What if those object instance variables themselves have instance variables? Think about it. What part of an object is potentially unique? Imagine what needs to be restored in order to get an object that's identical to the one that was saved . It will have a different memory location. of course. but we don't care about that, All we care about is that out there on the heap. we'll get an object that has the same state the object had when it was saved . Vi!iiJlj1 Iraill Barbell What has to happen for the Car object to be saved in such a way that It can be restored back to its original state? The Car object has two Instance variables that reference two other objects. Think of what-and how-you might need to save the Car. And what happens if an Engine object has a reference to a Carburator7 And what's inside the nre 0 array object? What doe8 It take to save a Car object? you are here. 435 serialized objects When an object is serialized, all the objects it refers to from Instance variables are also serialized. And all the objects those objects refer to are serialized. And all the objects those objects refer to are serialized••• and the best part is, it happens automaticallyl This Kennel object has a reference to a Dog [] array object. The Dog [] holds references to two Dog objects. Each Dog object holds a reference to a String and a Collar object. The String objects have a collection of characters and the Collar objects have an into When you save the Kennel, all of this ;s savedl - ~ /\ ·l:- ~9[ ) arrcri o'o~ 436 chapter 14 Serialization saves the entire object grapb.. All objects reterenced by instance variables, starting with the object being serialized. serialization and file 110 If you want your class to be serializable, itMpletMet1t Serializable The Serializable interface is known as a marker-or tag interface, because the interface doesn't have any methods to implement. Its sole purpose is to announce that the class implementing it is, well, serializable. In other words, objects of that type are saveable through the serialization mechanism. Ifany superclass of a class is serializable, the subclass is automatically serializable even if the subclass doesn't explicitly declare implements Serializable. (This is how interfaces always work. If your superclass "IS-A" Serializable, you are too). MIASI i",y\c",e'llt. ~ ~"eY O;,ot.s 'neY c r '\ t. Yu.f\t.i",c, objectOUtputStream. wri teObject (myBox) ; ~ 'h\C t:J( it. ...i\\ -tal a Sey\a\\~ , ' lhc ' a'la,io yat.\l.aO;,C. so ~'~\)\t 'S I'll" ,) import java. io. *; ~ SeYli e..d t.'nt i""'y~' '1011' 'II .. public class Box implements Serializable , ......... ~ ../ public void setWidth (int w) width = w; public void setHeight(int h} height = h; public static void main (String[] args) { = Box myBox new Box() ; myBox.setWidth(50}; myBox.setHeight(20); try { ~ 10 1 )) d \\~oo,seY l to i ~i\C YId.,.,.C I 楼.e a COY\'IItt.". ~ 'It. dot.s,", t. ",i lL eue...l.io\llS.! r .t. e'Jl.Ists路 \ d \\~oo路seY" . ovtV'abo"s t.a" "nYO'" ri"l' I I "c'" ~i\t YId"'c FileoutputStream fs = new FileOUtputStream("foo.ser") ; ObjectOUtputStream os = new ObjectOUtputStream(fs); os.writeObject(myBox); ~ Make ah Ob'etto os. close () ; ......... thaihed to :J "tp"tstrea", catch (Exception ex) { the tOhhettioh st ex.printSt&ckTrace(); Tell it to W\-'te 1 rea"" I l::he objett. you are here. 437 serialized objects Serialization is all or nothing. Can you imagine what would happen If some of the object's state didn't save correctly? Eeewww! That creeps me out just thinking about itl Like, what if a Dog comes back with rIO weight. Or no ears. Or the collar comes back size 3 instead of 30 . That just can't be allowedl Either the entire object graph is serialized correctly or serialization fails. import java.io .*; ~ POI'IO obj~eL public class Pond implements Serializable ( private Duck duck = new Duck(); ~ ea" " POl'ld has ont CIa ,.. D Ie. va~iable, a vl. , b~ so-ializ..ed. 'IN taTlee You can't serialize a Pond object if its Duck instance variable refuses to be senallzed (by not implementing Seriafizable). public static void llIain (Strinq[] arqs) { Pond myPond new Pond(); try ( FileOutputstreAm fs : new FileOUtputStream( "Pond.'ser"); ObjectOUtputStream os : new ObjeotOutputStream (fs) ; = . ..,e--. os . wr i t e Ob ) 8 c t (myPo n d ) ; os . close () ; catch(Exception ex) ( ex .printStackTrace() ; 438 ch ap te r 14 Wh . bl.yo.. . so-ia'i~ ""yPond ( P obJuV, ih Dlotk i>\lU a ol'ld aui:o-....a tit...i II c1.- . I'I~~ variable y 3 't-\ serlall~d. to YWI ~t ",ai" PO'PIO: . I" dass serialization and file 110 It's hopeless, then? I'm completely screwed if the idiot who wrote the class for my instance variable forgot to make it Serializable? Mark an Instance variable as transient If It can't (or shouldn't) be saved. If you want an instance variable to be skipped by the serialization process, mark the variable with the transient keyword. • import java.net.*: . t !o4i'r' "Otil'C. ~ class Chat implements Serializable { b-a.,.s~~iS ~aV"\a'o\tl O~~Il\~t.)I~ transient Strinq currentID; sa"t v ' . 'I>S~ n.'" J --~V"ia\i-z.abO"f\,~ ~ lI..SC'rNa....e variable be saved as pari } o.f ~t. obJt.d:'s st.dte dUY"'9 st.\"laliz.aiiol'l. Strinq userNamej / / more code will lfyou have an instance variable that can't be saved because it isn't serializable, you can mark that variable with the transient keyword and the serialization process will skip right over it. So why would a variable not be serializable? It could be that the class designer simply /ClTgot to make the class implement Serializable, Or it might be because the object relies on runtime-specific information that simply can't be saved. Although most things in the java class libraries are serializable, you can't save things like network connections, threads, or file objects. They're all dependent on (and specific to) a particular runtime 'experience'. In other words, they're instantiated in a way that's unique to a particular run of yoor program, on a particular platform, in a particular JVM. Once the program shuts down, there's no way to bring those things back to life in any meaningful way; they have to be created from scratch each time. you are here ~ 439 serialized objects therelm-~? DUmb ~uest19n8 Q: If serialization is so important, why isn't it the default for all classes? Why doesn't class Object implement Serializable, and then all subclasses will be automatically Serializable. A: Even though most classeswill, and should, implement Serializable, you always have a choice. And you must make a conscious decision on a c1ass-by-c1ass basis,for each class you design, to 'enable' serialization by implementing Serializable. First of all, if serialization were the default, how would you turn it off? Interfaces indicate functionality, not a lack of functionality, so the model of polymorphism wouldn't work correctly if you had to say/implements NonSerializable"to tell the world that you cannot be saved. A: Yes! If the class itself is extendable (i.e. not final), you can make a serializable subclass,and just substitute the subclass everywhere your code is expecting the superclass type. (Remember, polymorphism allows this.) Which brings up another interesting issue: what does it mean if the superclass is not serializable? Q: You brought it up: what does it mean to have a serializable subclass of a non-serializable superclass? A: Why would I ever write a class that wasn't serializable? First we have to look at what happens when a class is deserialized, (we'll talk-about that on the next few pages). In a nutshell, when an object is deserialized and its superclass is not serializable, the superclass constructor will run just as though a new object of that type were being created. If there's no decent reason for a class to not be serializable, making a serializable subclass might be a good solution. A: Q: Q: There are very few reasons, but you might, for example, have a security issue where you don't want a password object stored. Or you might have an object that makes no sense to save, because its key instance variables are themselves not serializable, so there's no useful way for you to make your class serializable. Q: If a class I'm using isn't serializable, but there's no good reason (except that the designer just forgot or was stupid), can I subclass the 'bad' class and make the subclass serializable? 440 cha pte r 14 Whoall just realized something big •••if you make a variable 'transient~this means the variable's value is skipped over during serialization. Then what happens to it? We solve the problem of having a non-serializable instance variable by making the instance variable transient, but don't we NEED that variable when the object is brought back to life? In other words, isn't the whole point of serialization to preserve an object's state? A: Yes, this is an issue,but fortunately there's a solution. If you serialize an object, a transient reference instance variable will be brought back as null, regardless of the value it had at the time it was saved.That means the entire object graph connected to that particular instance variable won't be saved.This could be bad, obviously, because you probably need a non-null value for that variable. You have two options: 1) When the object is brought back, reinitialize that null instance variable back to some default state. This works if your deserialized object isn't dependent on a particular value for that transient variable. In other words, it might be important that the Dog have a Collar, but perhaps all Collar objects are the same so it doesn't matter if you give the resurrected Dog a brand new Collar; nobody will know the difference. 2) If the value of the transient variable does matter (say, if the color and design of the transient Collar are unique for each Dog) then you need to save the key values of the Collar and use them when the Dog is brought back to essentially re-create a brand new Collar that's identical to the original. Q: What happens if two objects in the object graph are the same object? Like, if you have two different Cat objects in the Kennel, but both Cats have a reference to the same Owner object. Does the Owner get saved twice? I'm hoping not. A: Excellent question! Serialization is smart enough to know when two objects in the graph are the same.In that case,only one of the objects is saved,and during deserialization, any references to that single object are restored. serlallzatJon and file 1/0 Peserializatiot1= restorit1Q att object The whole point of serializing an object is so that you can restore it back to its original state at some later date, in a different 'run' of theJVM (which might not even be the same JVM that was running at the time the object was serialized). Deserialization is a lot like serialization in reverse. o e serializ.ed J,. Make a FilelnputStreQtn FilelnputStream fileStream = new ~ the objects Object one = os.readObject(); Object two = os.readObject(); Object three = os.readObject(); e -cast the objects you are here, 441 deserlallzlng objects What happe.,s durhtg deserializatio.,? When an object is deserialized, thejVM attempts to bring the object back to life by making a new object on the heap that has the same state the serialized object had at the time it was serialized. Well, except for the transient variables, which come back either null (for object references) or as default primitive values. 路 .\\ \his stty - :t 01101001 object is read as bytes 01101110 __- - - - ~ 011010010110111001 is read by 01 FiJeInputStream (0 connection stream) File -.II ~\"d'" d ~e dass. \ ea class is found and loaded, saved instance variables reassigned is chained to ObjectInputStream (a chain stream) o The object is read from the stream. e The determines (through stored with JVM info the serialized object) the object's class type. The JVM attempts t? find and load the object's class. If the JVM can't find and/or load the class, the JVM throws an exception and the deserialization fails . A new object is given space on the heap, but the serIalized obJeds constructor does NOT runl Obviously, if the constructor ran, it would restore the state of the object back to its original 'new' state, and that's not what we wont. We want the object to be restored to the state it had when it was serialized, not when it was first created. 442 chapter 14 . r ~e JVM ~'(O-.l ell' eCJW:I~ Object serialization and file I/O Gclasses, beginning with the first non-serializable one, will reinitialize their state. CD The object's instance variables are given the values from the serialized state. Transient variables are given a value of null for object references and defaults (0, false , etc.) for primitives. :t1ere1lU'H ? DUmb ~uestl9n8 0 Q: Why doesn't the class get saved as part of the ob" ject? That way you don't have the problem with whether the class can be found. A: Sure,they could have made serialization work that way. But what a tremendous waste and overhead. And while it might not be such a hardship when you're using serialization to write objects to a file on a local hard drive, serialization is also used to send objects over a network connection. If a class was bundled with each serialized (shippable) object, bandwidth would become a much larger problem than it already is. For objects serialized to ship over a network, though, there actually is a mechanism where the serialized object can be 'stamped' with a URl for where its class can be found. This is used in Java's Remote Method Invocation (RMI) so that you can send a serialized object as part of, say, a method argument, and if the JVM receiving the call doesn't have the class,it can use the URl to fetch the class from the network and load it, all automatically. (We'll talk about RMI in chapter 17.) Q: What about static variables? Are they serialized? A: Nope. Remember, static means "one per class"not "one per object" Static variables are not saved, and when an object is deserialized, it will have whatever static variable its class currentlyhas.The moral : don't make serializable objects dependent on a dynamically-changing static variable! It might not be the same when the object comes back. you are here ~ 443 serialization example Savit1g at1d restorit1g the gattte characters import java.io.*; public class GameSaverTest { Make sow-t tna~atte'rt-路 路 public static void main(String(] axgs} { GameCharacter one = new GameCharacter(50, "Elf" , new Strinq{] ("bow", "sword", "dust"}); GameCharacter two new GameCha.racter (200, "Troll", new String! 1 {"bare hands" J "big ax"}) ; new GameCharacter(120, "Magician", new String[l ("spells", "invisibility"}) ; GameCharacter three = = / / imagine code that does things with the characters that might change their state values try { ObjectOutputStream 09 =new ObjectOutputstream (new Fileoutputstream ("Game. ser") ) ; os.writeObject(one) ; os.writeQbject(two) ; os.writeQbject(three) i os. close () ; catch(IOExceptioD ex) ( ex.printStackTrace() ; } one = null; Wt stt tnt... tD ""II so '\lit un't two = null; ~ alUSS t.ht objttts 0l'I -\:he ht.af路 three null; = Now \'"tad tht... balK ~{ j" +\'"0'" the tilt... ~ Objectlnputstream is = new Objectlnputstream(new Filelnputstream ("Game. ser")) ; Gamecharacter oneRestore = (GameCharacter) is. readObject () ; GameCharacter twoRestore = (GameCharacter) is. readObject () ; GameCharacter threeRestore = (GameCharacter) is. readObject () ; System. out. println ("One' B type : \I + oneRestore. g8tType () ) ; Systern.au t .println ("Two's type : " + twoRestore. getType () ) ; System.out.println("Three's type: " + threeRestore .getTypeO) catch(Exception ex) ( ex .printstackTrace(); 444 ch apter 14 ~ j Chtt'tt. -to see i ~ .t. \ .... oVlt.t4路 serialization and file 110 fhe G-atl1eCharacfer class import java.io. t ; public class GameCharacter implements Serializable { int power; String type; String[] weapons; public GameCharacter(int p, String t, String[] w) { power =p; type =t; weapons =w; public int getPower () return power; public String getType() return type; public String getWeapons () { StringClic:k</a> 速 . The Web server (HTTP) gets the request and sends back a .Jnlp file (this Is NOT the JAR) . The .jnlp file is an XML document that states the name of the application's executable JAR file, Java Web Start (a small 'helper app' on the client) is started up by the browser. The JWS helper app reads -give me MyApp.jar" the .jnlp file, and asks the server for the MyApp.jar file. @ The Web server 'serves ' up the requested .jar file. Java Web Start gets the JAR and starts the oppUcation by calling the specified malnO method (just like an executable JAR). Next time the user wants to run this opp. he can open the Java Web Start application and from there launch your app. without even being online. 598 c hap te r 17 -- package, jars and deployment The .j"lp file To make aJava Web Start app, you need to Jnlp (Java Network Launch Protocol) file that describes your application. This is the file the JWS app reads and uses to find yourJAR and launch th e app (by calling theJAR's main 0 method) , A jnlp file is a simple XML document that has several different things you can put in, but as a minimum, it should look like this: ~ ye ~o~ s~et.l l~e seY"e~· , -tal"'s 'II ~ L~~ 'IS O'f\ , G10ase :\:,a.n- s'\:t so 'lie'ye v51'f\~10 \~e (.0 e y'fle S \ot.a\~OS I \" yOV' 'lie J 'II~eye 'f~\Ii -tnls O'f\ o'J'y "\1,1.0.0. . -tn\S 'fIO~\G ytsS ,.f ' e -\:.tS-\:."'V'~L .1, ado L L sey"eY , " v' eY ~ \oo~va"," ev 'fIev -t~e \oc.a S oy\ o'J'Y ·l~~:;o.\~s...a..t-(.o'" s-tayt,a~~ /1,.rIl'fl,'fIIi. <?:xml ~ . L \ <information> <title>kathy App</title> <vendor>Wickedly Smart</vendor> <homepaqe hre£="index.html"/> <description>Head First WebS tart demo</description> <icon hre£="kathys.qi£"/> . <o£fline-allowed/>>/ </in£ormation> . ~ n1 ya", withollt This ",eanS the lOSer tan Yl/in 'f0l/il'" r . ottl' e . ted ~ the internet. -t he lOSer I~ \n , ~t:~::;~~e a\Atomatit-l/ifdahn~ teahye wont work. <resources> This says that YOI/il'" al'l' needs lIel'"sion 1.3 <j2se version="l. 3+" /> ~ ot Jalla, oY ~reate\". <jar hre£="MyApp . jar" /> ~ n e tlatl'le 0+ oiher JAo ,. eXef·l<tah/ JAo/ y; < / resources> " 1"; es as w /I 1.L 1.e " . 010 tI'Iip'hi h ever. SOlltld d . e • <;nat: hold .I. ave s an ItI'Ia~es llSed b Outer (. asses 0\'" YYOll\" apr. <application-desc t: </jnlp> ~ This is like the tI'Iainfest Main-Class ent\"'1'" it says whith zlass in the JAR has the llIairD llIethod. you are here . 599 deploying with JWS Steps for making and deploying a Java Web Start app 0 Make an executable JAR for your application. MyApp.Jar ~ Write a .Jnlp file . MyApp.jnlp 速 @) Place your JAR and .jnlp files on your Web server. Add a new mime type to your Web server. application/x-java-jnlp-nle This causes the server to send the .jnlp file with the correct header, so that when the browser receives the Jnlp file it knows what it is and knows to start the JWS helper app. Create Q Web page with a link to your .jnlp file .......... ~ I~ ~--: MyJWSApp.hbnl <HTML> <BODY> <8 href= '~App2.jnlpH>Launch My <I BODY> </HTML> 600 chapler 17 Application</a> package, jars and deployment, ,. LOOk at the sequence of events below, and p Iacethem in the order In which they 2. occur In a JWS application. 3. __-~--:::;':::-;;;-v !heWeb server sends a JAR \he Web browser starts up theJWS helper app 4. lileto!he JWS helper app 5, the Web server sends tiJe to the browser user dicks a Web page Itn the JWS helper app invokes the JAR's malnO method 6. er requests a .jnlpfile from the Web server 7. dlim~estJons Q.: How IsJava Web Start dl~rent from an applet1 A: Applets can't live outside of a Web browser. An applet is downloaded from the Web as part of a Web page rather than simply from a Web page. In other words, to the browser, the applet IsJust like a JPEG or any other resource. The browser uses either a Java plug-In or the browser's own built-In Java (far less common today) to run the applet. Applets don't have the same level of functionality for things such as automatic updating, and they must always be launched from the browser. With JWS applications, once they're downloaded from the Web, the user doesn't even have to be usIng a browser to relaunch the application locally. Instead, the user can start up the JWS helper app, and use It to launch the already-downloaded application again. Q..: A: , - - BULlO POINTS - • Java Web Start technology lets you deploy a stand-alone client application from the Web. • Java Web Start includes a 'helper app' that must be installed on the client (along with Java). • AJava Web Start (JWS) app has two pieces: an executable JAR and a .jnlp file. • A .jnlp file is a simple XML document that describes your JWS application. It includes tags for specifying the name and location ofthe JAR, and the name ofthe class with the mainO method. • When a browser gets a .jnlp file from the server (because the user clicked on a link tothe .jnlp file), the browser starts upthe JWS helper app. • The JWS helper app reads the .jnlp file and requests the executable JAR from the Web server. • When the JWS gets the JAR, it invokes the mainO method (specified in the .jnlp file). What are the security restrictions of JWS1 JWS apps have several limitations Including being restricted from reading and writing to the user's hard drive. But... JWS has Its own API with a special open and save dialog box so that, with the user's permission, your app can save and read Its own flies In a speclaI, restricted area of th e user's drive. you a re he re ~ 601 exercise: True or False We explored packaging, deployment, and JWS in this chapter. Your job is to decide whether each of the following statements is true or false. 1. The Java compiler has a flag, -d, that lets you decide where your .class files should go . 2. AJAR is a standard directory where your .class files should reside. 3. When creating a Java Archive you must create a file called jar.mf. 4. The supporting file in a Java Archive declares which class has the main() method. 5. JAR files must be unzipped before the jVM can use the classes inside. 6. At the command line, Java Archives are invoked using the -arch flag. 7. Package structures are meaningfully represented using hierarchies. 8. Using your company's domain name is not recommended when naming packages. 9. Different classes within a source file can belong to different packages. 10. When compiling classes in a package, the -p flag is highly recommended. 11. When compiling classes in a package, the full name must mirror the directory tree. 12. Judicious use of the -d flag can help to assure that there are no typos in your class tree. 13. Extracting aJAR with packages will create a directory called meta-info 14. Extracting aJAR with packages will create a file called manifest.mf. 15. TheJWS helper app always runs in conjunction with a browser. 16. JWS applications require a .nlp (Network Launch Protocol) file to work properly. 17. AJWS's main method is specified in itsJAR file. 602 chapter 17 package, jars and deployment ~U1Il1Ilar)'-Or~ss 7-.0 Anything in the book Isfair game for this onel Aaoss Down 6. Won'ttravel 26. Mine is unique 1. Pushy widgets 16.Who's allowed 30. 110 cleanup 9. Don't split me 27. GUl's target 2. __ of my desire 19. Efficiency expert 31. MUll-nap 10. Release-able 29. Java team 3. 'Abandoned'moniker 20. Early exit 34. Trig method 11. Got the key 30. Factory 4. A chunk 21. Commonwrapper 36. Encaps method 12. VOgang 32. Fora while 5. Math not trlg 23. Yes or no 38. JNLP format 15. Flatten 33. Atomic It 8 6. Be brave 24. Java jackets 39.VB's final 17. Encapsulated returner 35. Good asnew 7. Arrangewell 26. Not behavior 40. Java branch 18. Shipthis one 37. Pairsevent 8. Swing slang 28. Socket's suite 21 . MakeIt so 41. Wheredo I start 11. I/O canals 22. I/O sieve 42 A little firewall 13.Organized release 25. Diskleaf 14. Not for an instance you are here ~ 603 exercise solutions ,. 2. 3. the Web server sends a.jnlp file 10 the browser 4. the Web browser starts up the JWS helper app 5. 604 \ \he NJS helper app requests '\ lheJARfl\e 6. I the Web server sends a JAR Lli/e 10 the JWS helper app 7. I the JWS helper app Invokes the JAR's mal nO method I I True 1. The Java compiler has a flag, -d, that lets you decide where your .class files should go. False False True False False 2. AJAR is a standard directory where your .class files shouJd reside. 3. When creating a Java Archive you must create a file called jar.mf, 4. The supporting file in aJava Archive declares which class has the mainO method. 5. JAR files must be unzipped before thejVM can use the classes inside. 6. At the command line, Java Archives are invoked using the -arch flag. True 7. Package structures are meaningfully represented using hierarchies. False False False True 8. Using your company's domain name is not recommended when naming packages. True 12. Judicious use of the -d flag can help to assure that there are no typos in your tree. True 13. Extracting aJAR with packages will create a directory called meta-inf, True 14. Extracting aJAR with packages will create a file called manifestmf. False False False 15. TheJWS helper app always runs in conjunction with a browser. 9. Different classes within a source file can belong to different packages. 10. When compiling classes in a package, the -p flag is highly recommended. II. When compiling classes in a package. the full name must mirror the directory tree. 16. ]WS applications require a .nlp (Network Launch Protocol) file to work properly. 17. A]WS 's main method is specified in itsJAR file. chap t er 1 7 ~UtIltIlary-Or~55 t. O you are here ~ 605 18 remote deployment w it h RMI Distributed Computing Being remote doesn't have to be a bad thing. Sure, things areeasieT when all the parts of your application are in one place, in one heap, with one JVM to rule them all. But that's not always possible. Or desirable. What If your application handles powerful computations, but the end-users are on a wimpy little Java-enabled device? What if your app needs data from a database, but for security reasons, only code on your server can access the database? ImagIne a big e-commerce back-end, that has to run within a transaction-management system? Sometimes, part of your app must run on a server, while another part (usually a client) must run on a different machine. In this chapter, we'll learn to use Java'samazingly sImple Remote Method Invocation (RMI) technology. We'll also take a quick peek at Servlets, Enterprise Java Beans (EJB) ,and JInl, and look at the ways In wh Ich EJB and J Inl depend on RMI. We'll end the book by writi ng one of the coolest thi ng s you can make In Java,a universal servicebrowser. th is is a new ch apter 807 how many heaps? .. RMlapp Combination 100% Local Method calls are always betweeK two objects OK the sattte heap. So far in this book, every method we've invoked has been on an object running in the same virtual machine as the caller. In other words, the calling object and the callee (the object we're invoking the method on) live on the same heap. class Foo { void go () Bar b = new Bar(); b.doStuff() j public static void main (String [] args) Faa f = new Foa()j f .go () ; In the code above, we know that the Foo instance referenced byfand the Bar object referenced by bare both on the same heap, run by the sameJVM. Remember, the jVM is responsible for stuffing bits into the reference variable that represent how to get to an object on the heap. TheJVM always knows where each object is, and how La get to it. But the JVM Can know about references on only its own heap! You can 't, for example, have aJVM running on one machine knowing about the heap space ofaJVM running on a different machine. In fact, aJVM running on one machine can't know anything about a differentjVM running on the same machine. It makes no difference if theJVMs are on the same or different physical machines; it matters only that the twoJVMs are, well, two different invocations of the JVM. 608 cha pte r 18 ( In most applications, when one object calls a method on another, both objects are on the same heap. In other words, both are running within the same JVM. remote deployment with RMI What ifyou wattt to ittvoke 8 'Method Ott att object rutttthtg Ott another tH8chitte? We know how to get information from one machine to anotherwith Sockets and I/O. We open a Socket connection to another machine, and get an Ourputfitream and write some data to it. But what if we actually want to call a method on something running in another machine... another JVM? Of course we could always build our own protocol, and when you send data to a ServerSocket the server could parse it, figure out what you meant, do the work, and send back the result on another stream. What a paia, though. Think how much nicer it would be to just get a reference to the object on ~ ~t.1 lovtS the other machine, and call a method. yO-.lC"hl, ~I.. ... ~n 'b\C) t.o t....I禄't.n "V' ~ t.O\t.~abo'f\S~. ..... Imagine two computers... I __'--'- ~ JVM ~ r ---' CJ~r.=a~~~..::::15 L-' '--- I CJ l-J C-.l L-J L..J L--o; l . . \l 路 U 'I - -" Little Big Big has something Little wants. C~ute-l0~er. Little wants to send some data to Big, so that Big can do the heavy ~ruting. Little wants simply to call a method... double doCalcUsingDatahase(CalcNumbers numbers) and get Lack the result. But how can Little get a relerence to an object on Big? you are here' 609 two objects, two heaps Object AI rut,.1i.,g 0., Little, wa.,ts to call a tttethod 0., Object ~ ru.,.,it1g Ot1 Jig. The question is, how do we get an object on one machine (which means a different heap/]VM) to call a method on another machine? doCalcUsi ngDatabase.O ~~~ return value - - ~ e-. .,,~- ~ Jut you can't do that. Well, not directly anyway. You can't get a reference to something on another heap. If you say: Dog d = ??? Whatever d is referencing must be in the same heap space as the code running the statement, But imagine you want to design something that will use Sockets and I/O to communicate your intention (a method invocation on an object running on another machine), yet still feel as though you were making a local method call. In other words. you want to cause a method invocation on a remote object (i.e., an object in a heap somewhere else) J but with code that lets you pretend that you're invoking a method on a local object. The ease ofa plain old everyday method call, but the power of remote method invocation. That's our goal. That's what RM1 (Remote Method Invocation) gives you! But let 's step back and imagine how you would design R.i\1I if you were doing it yourself. Understanding what you'd have to build yourself will help you learn how RMl works . 610 chapter 18 .- ...... '- . -.11 _'-- , -, remote deployment with RMI A design for remote method calls Create four things: server, client, server helper, client helper • Create client and server apps. The server app is the remote service that has an object with the method that the client wants to invoke. Client heap Create client and server 'helpers'. They'll handle 011 the low-level networking and I/O details so your client ond service can pretend like they're in the same heap. Server heap you are he re> 611 client and server helpers fhe role of the 'helpers" The 'helpers' are the objects that actually do the communicating. They make it possible for the client to ad as though its calling a method on a local object. In fact, it is. The client calls a method on the client helper, as if the clienthelper were the actual service. The client helper is a frrorj for the Real Thing. In other words, the client object thinks it's calling a method on the remote service, because the client helper is pretendingto be th e service obj eel. Pretendingto be the thingwith the met/wd the client wants 10 call! But the client helper isn't really the remote service. Although the client helper ruts like it (because it has the same method that the service is advertising), the client helper doesn't have any of the actual method logic the client is expecting. Instead, the client helper contacts the server, transfers information about the method call (e.g., name of the method, arguments, etc .) , and waits for a return from the server. Your client object gets to act llke it's making remote method calls. But what it's r!!llY doing is calling methods on a heap-local 'proxY object that handles all the low-level details of Sockets and streams. On the server side, the service helper receives the request from the client helper (through a Socket connection), unpacks the information about the call, and then invokes the real method on the real service object. So to the service object, the call is local. It's coming from the service helper, not a remote client. The service helper gets the return value from the service. packs it up, and ships it back (over a Socket's output stream) to the client helper. The client helper unpacks the information and returns the value to the client object. Server heap 612 chapte r 18 remote deployment with RMI How the method call happens • Client object calls doBigThingO on the client helper object Se~r • Client helper packages up information about the call (arguments, method name, etc.) and ships it over the network to the service helper . Client heap • heap I Server heap "client wonts to call a method" Service helper unpacks the information from the client helper, finds out which method to call (and on which object) and invokes the real method on the real service ob] ect. - Client heap - I "client wants to coli a method" you are here ~ 613 RMI helper objects Java RMI gives you the cliet1t at1d service helper objects! In Java, RMJ builds the client and service helper objects for you, and it even knows how to make the client helper look like th e Real Service. In other words. RMI knows how to give the client helper object the same methods you want to call on the remote service. Plus, RMI provides all the runtime infrastructure to make it work, including a lookup service so that the client can find and get the client helper (the proxy for the Real Service). With RMI. you don 't write any of the networking or I/O code yourself. The client gets to call remote methods (i.e. the ones the Real Service has) just like normal method calls on objects running in the client's own 10ca1JVM. Almost. There is one difference between RMI calls and local (normal) method calls. Remember that even though to the client it looks like the method call is local, the client helper sends the method call across the network. So there is networking and I/O . And what do we know about networking and I/O methods? They throw exceptions allover the place. So, the client does have to acknowledge the risk. The client has to acknowledge that when it calls a remote method. even though to the client it's just a local call to the proxy/helper object, the call ultimaJely involves Sockets and streams. The client's original call is local, but the pro"]' turns it into a remote call. A remote call JUSt means a method that's invoked on an object on another JVM. HQ11J the information about that call gets transferred from oneJVM to another depends on the protocol used by the helper objects. With RMI. you have a choice of protocols: JRMP or IIOP,JRMP is RMJ's 'native' protocol, the one made just for Java-ta-Java remote calls . nop, on the other hand. is the protocol for CORBA (Common Object Request Broker Architecture), and lets you make remote calls on things which aren't necessarily Java objects. CORBA is usually much more painful than RMI, because if you don't have Java on both ends, there's an awful lot of translation and conversion that has to happen. But thankfully, all we care about isJava-to-Java, so we're sticking with plain old. remarkably easy RMl. They're risky! In RMI, the client helper is a 'stub' and the server helper is a 'skeleton'. .. 614 chapter 18 Client heap Server heap remote deployment with RMI Making the Remote Service This is an overview of the five steps for making the remote service (that runs on the server). Don't worry, each step is explained in detail over the next few pages. Server D!! Step one: Make a Remote Interface The remote interface defines the methods that a client can call remotely. It's what the client will use as the polymorphic class type for your service. Both the Stub and actual service will implement this! Step two: Make a Remote Implementation This is the class that does the Real Work. It has the real implementation of the remote methods defined in the remote interface. It's the object that the client ~~~~~~. Step three: Generate the stubs and skeletons using rmic These are the client and server 'helpers' . You don't have to create these classes or ever look at the source code that generates them. It's all handled automatically when you run the rmic tool that ships with yourJava development kit, MyRemotelmpl_Stub.class 101101 Jtl 1...10' II HI 001 Ol ebOt." MyRemotelmpLSkel.class Step four: Start the RMI registry (rmiregistry) The nniregistry is Ji ke the white pages of a phone book. It 's where the user goes to gel the proxy (the client stubv'helper object). Step five: Start the remote service You have to get the service object up and running. Your service implementation class instantiates an instance of the service and registers it with the RMI registry. Registering it makes the service available for clients. you are he re . 615 a remote interface Step one: Make a Remote Interface MyRemote.java Extend java.rmLRemote Remote is a 'mar ker ' interface, which means it has no methods. It has special meaning for RMI, though, so you must follow this rule. Notice that we say 'extends' here. One interface is allowed to extend another interface. public interface MyRemote Declare that all methods throw a RemoteException The remote interface is the one the client uses as the polymorphic type for the service. In other words, the client invokes methods on something that implements the remote interface. That something is the stub, of course, and since the stub is doing networking and I/O, all kinds of Bad Things can happen. The client has to acknowledge the risks by handling or declaring the remote exceptions. If the methods in an interface declare exceptions, any code calling methods on a reference of that type (the interface type) must handle or declare the exceptions. t , ~ . . ' av,H'",i ' import java. rmi . * ; ~ th~ R~",oh '"n;c;Y ate'S ,,, J public interface MyRemote extends Remote { public String sayHello() throws } Be sure arguments and return values are primitives or Serializable Arguments and return values of a remote method must be either primitive or Serializable, Think about it. Any argument to a remote method has to be packaged up and shipped across the network, and that's done through Serialization. Same thing with return values. If you use primitives, Strings, and the majority of types in the API (including arrays and collections), you'll be fine. If you are passing around your own types ,just be sure that you make your classes implement Serializable. public sayHello() throws RemoteException; " ' - This r-et;...rn vc1/~e ~onna is be shirred over ~he wire +\'"0'" the serve\" batk to the ~"ent, so it lrIl.tSt be Se\"ializable. That s how ar~s and \"et~rn valf.tt:s ad:; ratka~ed ~r and Sent , J remote deployment with RMI -- Step two: Make a Remote Implementation 1 ~ ....... ";i _ - • Implement the Remote interface 0 MyRemotelmpl.java Your service has to implement the remote interface-the one with the methods your client is going to call. public class MyRemoteImpl extends UnicastRemoteObject public String sayHello() { ~ return "Server says, 'Hey'''; Th~ t~n-riler will n-ake SlAre that } } • { rIA lie In-r1en-el'lteo all the n-ethods / / more code in class t~.om the intel-fate '/OIA ir..r1er..el'lt. Il'I IS tase, there s ol'lly Ohe. Extend UnicastRemoteObject In order to work as a remote service object, your object needs some functionality related to 'being remote'. The simplest way is to extend UnicastRemoteObject (from the java.rmi.server package) and let that class (your superclass) do the work for you. public class MyRemoteImpl • implements MyRemote { Write a no-arg constructor that declares a RemoteException Your new superclass, UnicastRemoteObject, has one little problem-its constructor throws a RemoteException. The only way to deal with this is to declare a constructor for your remote implementation,just so that you have a place to declare the RemoteException. Remember, when a class is instantiated, its superclass constructor is always called. If your superclass constructor throws an exception, you have no choice but to declare that y'our constructor also throws an exception. ,/o\A dOl'l't halle to thO 'l'l 1 t the t.01'lS rv.t., ~hat "OIAr slAfert.lass wa" to det. are " I L, t.~trv.t.tor throws al'l euefvOTI. - P ubl i c U.rRemoteImpl ( ) 6"6:t • L yv-".al'l~ I:i ~ toY- YoIA JV-S" l'Ie..~ Register the service with the RMI registry Now that you've got a remote service, you have to make it available to remote clients. You do this by instantiating it and putting it into the RMI registry (which must be running or this line of code fails). When you register the implementation object, the RMI system actually puts the stub in the registry, since that's what the client really needs. Register your service using the static rebind() method ofthejava.rmi.Naming class. . M",e (that t.I\el'ts tal' ~t try ( q\ve ~o\Ar serv.lt.et~ ~\str~) al'd re~\ster It Remote service new RemoteI 1 () ; ~ \otnit.R~IIl'Ire~\:t:;. W'hel'l ~o\A b.\"dthe the = wltn e . t. RMI sways t.he serv,u o'r .,- servit.e obJe~_, Lhe stlAb \l' the re~istr~· '---'" stlAb al'd flA';> " ~ you are here. 617 Step three: generate stubs and skeletons G) Run rmic on the remote implementation class (not the remote: Interface) The rrnic tool, that comes with the java software development kit, takes a service implementation and creates two new classes. the stub and the skeleton. It uses a naming convention that is the name of your remote implementation, with either _Stub or _Skeleton added to the end . There are other options with rmic, including not generating skeletons, seeing what the source code for these classes looked like, and even using llOP as the protocol. The way we're doing it here is the way you'll usually do it. The classes will land in the current directory (i.e. whatever you did a cd to). Remember, rmic must be able to see your implementation class, so you'll probably run rmic from the directory where your remote implementation is. (We're deliberately not using packages here, to make it simpler. In the Real World, you'll need to account for package directory structures and fully-qualified names). Step four: run rmiregistry (1) Bring up a terminal and start the rmlre:gistry. Be sure you start it from a directory that has access to your classes. The simplest way is to start it from your 'classes' directory. Step five: start the service CD Bring up another terminal and start your service This might be from a main 0 method in your remote implementation class. or from a separate launcher class. In this simple example. we put the starter code in the implementation class, in a main method that instantiates the object and registers it with RMl registry. 618 chapter 18 MyRemotelmpl_Stub.class lClnO). 10 liO I o U 6 001 \ 0 OO~ 0) MyRemotelmpCSkel.class remote deployment with RMI Complete code for the server side The Remote interface: The Remote service (the implementation): public MyReIDoteImpl () throws RemoteException { I <c> public static void main (String[] args) ( try ( MyRemote service new MyR8moteImpl();~ Naming.rebind("Remote Hella H I service); catch (Exception ex) { ~ M.ike the rt....oU ob路 t I , I ex.printStacltTrace () ; 'r"'irt4ish-y'd路 1., :Je~"1.~e" b"'d = itb to -the _I"~ Ult S'td{;H':. N路 '''?,..... e i"dO. The e9;su ..- it:. lA"d . "ted to look it. "d 路 1., t"r r~ he 1I<1"'e tliel'lts '011/1 --r tr. Ule r"" re9'shy J J\d"'t yOIA l- t. you ar e here ~ 619 getting the stub How does the client getthe stub~? The client has co get the stub object, since that's the thing the client will call methods on. And that's where the RMI registry comes in. The client does a 'lookup', like going to the white pages ofa phone book, and essentially says, "Here's a name, and I'd like the stub that goes with that name." loolcllpO is a statit l\'ltihod the Na",i~ tlass ~Remote) o-t \t Naming.lookup("rmi://127.0.0 .1/Remote Bello"); l' \ '1011 hil'lt to t..ut it. to tht i"u...tate, siYlle t.he loolc'-r ...ethod ret.'-Yl'I$ t'ffe Objett.. ~ 'f0\l.'" host. l\il",e 0'" /P addytsS ~oes heye Server • Client does Q lookup on the RMI registry Naminq.lookup("rm1://127.0.0.1/Ramot:8 Bello"); • RMI registry returns the stub object (as the return value of the lookup method) and RMI deserializes the stub automatically, You MUST have the stub class (that rmic generated for you) on the client or the stub won 't be deserialized, • Client Invokes Q method on the stub, as though the stub IS the real service 620 chapter 18 remote deployment wit h RMI How does the client get the stub class? Now we get to the interesting question. Somehow, someway, the client must have the stub class (that you gene rated earlier using rmic) at the time the client does the lookup, or else the stub won 't be deserialized on the client and the whole thing blows up. In a simple system, you can simply h and-deliver th e stub class to the client. There's a much cooler way, though, although it's beyond the scop e of this book. But just in case you're interested, the cooler way is called "dynamic class downloading". With dynamic class downloading, a stub obj ect (or really any Serialized object) is 'stamped' with a URL that tells the RMI system on the client where to find the class file for that object. Then, in th e process of deserializing an object, ifRMI can't find the class locally, it uses that URL to do an HTTP Get to retrieve the class file. So you 'd need a simple Web server to serve up class files, and you'd also need to change some security parameters on the client. There are a few other tricky issues with dynamic class downloading, but that's the overview. Complete client code import . . *. Java.rnu.. , I ~ The Na... ih9 dass (.f < re...ire3ist r look J'ava 路r ...,. patkage Y . c: ~01n9 the lop) IS'n t he public class MyRemoteClient { public static void main (String[] args) new MyRemoteClient() .go(); public void go () try { MyRemote service String s r: = \I- J. t ht V't~isty~ as ~'/Yt \t t.o.,.,es Ol/.t 't ~芦~tt t ht t.as O'oy:.t.t, so don (MyRemote) = service . sayHello () ; It System.out.println(s); \ catch (Exception ex) { ex. printStackTrace (); Naming .lookup(~rmi://127.0.0.1/Remote Y d 1-. 010 nee "Ule or hosw...e 7 IP address Hello"); '\ L- a c1 t.he "artie ~ed VI VI I L ' d Lhe stV'vit.t ~Vld V'CUly\ ~ It looks j lASt like a I 9'" ~r old "'cthod tall! (E'lt.tept it Re...oteE'lt.teptio~1AS at now/edse the 1 you are here ; 621 RMI class files Je sure each lMachhte has the class files it "eeds. The top three things programmers do wrong with RMI are: 1) Forget to start rmiregistry before starting remote service (when you register the service using Nammg.rebindt), the nniregistry must be running!) 2) Forget to make arguments and return types serializable (you won't know until runtime; this is not something the compiler will detect.) 3) Forget to give the stub class to the client. ,gclient tilt zhe..+' uses the i"te~~att to tall Methods <禄> the st.",b. Tnt clie"t JVM "eeas the si",b tla~, b~ the tliO'lt "eveY rete",s to the shb class i.- toae. Tbe tlit"t. always v.ses the ve... o{:e i"ktate, as thou5h the l"t"'ote iPlttrk.lte WERE. t.he atb.al yC"'oU: objttt. Do,,'t +cn-5tt, llUJln, lIilUO""l o U 10UIn10 UO' o 11 0 CI O<Il1G1 0(111011 0CI110 00\ 01 Cllentclass MyRemotelmpLStub.class MyRemotelmpl,class MyRemotelmpLStub.class lQt~OI lOU01., l~ to HoO 1. 6 11 0 ~ OGt 1\ I) ~~l 10 UtOl H.I) I L~ "'" .t MyRemotelmpl_Skel.class MyR9mote.class SeYVer "eeds both the stub a..a Skeld:.o" dasses, as ""ell as the seYviu a.-t:I the l"C"'otL i"u,-tau. It. MeaS the st.Lt'o tlass ~t.alAU rC"'-t'"bcr, the st.",b is s",bst.i+'"iea to\" the veal service, wh(l\ the veal w-vice is bOWla +.0 the RMI . . c5isttY路 622 chapter 18 remote deployment with RMI ~n yoor penCil W:~$J~ 1. Look at the sequence of events below, and place them in the order in which they occur in a Java RMI application. 2. 3. The client gets Ihe slub from -The--SlU-b-se-n-;ds-:th~e::--:m::e:;\h;;:od~ Ihe RMI regIstry call to the server The diem invokes 8'm on the stub s. The clIent does a looku the RMI Registry The RMI registry Issta 4. 6. ate service (remote implementation) isinstantiated 7. ,---- IUUR POINTS~ • An object on one heap cannot geta normal Java reference 10 an object on a different heap (which means running on a different NM) • Your remote service should extend UnicastRemoleObject. (Technically there are other ways 10 create a remote object, but extending UnicaslRemoteObject is the simplest). • Java Remote Method Invocation (RMI) makes it seem like you're calting a method on a remote object (l.e. an object in a different JVM), but you aren't • Your remote service class must have a constructor, and the constructor must declare a RemoleException (because the superclass constructor declares one). • When a client calls a method on a remote object, the client is really calling a method on a proxy of the remote object. The proxy is called a 'stub'. • Your remote service must beinstantiated, and the object registered with the RMI registry. • • A stub is a client helper object that takes care of the lowlevel networking details (sockets, streams, serialization, etc) bypackaging and sending method calls to the server. To register a remote service, use the static Naming.rebind('Service Name", servicelnstance); • The RMI registry must be running on the same machine asthe remote service, before you try 10 register a remote object with the RMI registry. • The client looks up your remote service using the static Naming.lookup{'rmi:/IMyHostNameJServiceName"); • Almost everything related to RMI can throw a RemoteException (checked bythe compiler). This Includes registering orlooking upa service In the relgstry, and a/lremote method calls from the client tothe stub. • To build a remote service (in other words, an object that a remote client can ultimately call methods on), you must start with a remote Interface. • Aremote Inlerface must extend the java.rml.Remote Interface. and all methods must declare RemoteException. • Your remote service implements your remote Interface. you are here. 623 usesfor RMI Yeah, but who really uses RMI? I use it for serious B-to-B, e-commerce backends, running on J2EE I technology. 624 chapter 18 remote deployment w ith RMI 100% Local Combination 100% Remote What about Servlets1 Servlets are Java programs that run on (and with) an HTfP web server. When a dient uses a web browser to interact with a web page, a request is sent back to the web server. If the request needs the help of a Java servler, the web server runs (or calls, if the servlet is already running) the servlet code. Servlet code is simply code that runs on the server, to do work as a result of whatever the client requests (for example, save information to a text file or database on the server). If you're familiar with eGl scripts written in Perl, you know exactly what we're talking about. Web developers use CGJ scripts or servlets to do everything from sending user-submitted info to a database, to running a web-site's discussion board. And even serolets can use RMI/ By far, the most common use ofJ2EE technology is to mix servlets and EJBs together, where servlets are the client of the EJB. And in that case, theserulet is using RJ1IlI to talk to theEJBs. (Although the way you use RM1 with EJB is a little different from the process wejust looked at.) 1 Client fills out a registration form and clicks 'submit', The HTTP server (i.e. web server) gets the request, sees that it's for a servlet, and sends the request to the servlet, Web Browser (client) -client requests RegisterServlet" Web Server Servlet (Java code) runs, adds data to the database, composes a web page (with custom info) and sends it back to the client where it路 displays in the browser. Web Browser (client) "client requests Register5ervlet" Web Server ................ ........ "heres a confirmation page" conflnn.html you are here. 625 very simple servlet Step for making and running a servlet G) Web Sel"Ver Find out where your servlets need to be placed. For these examples, we'll assume that you already have a web server up and running, and that it's already configured to support servlets, The most important thing is to find out exactly where your servlet class files have to be placed in order for your server to 'see' them. If you have a web site hosted by an ISP, the hosting service can tell you where to put your servlets.just as they'll tell you where to place your eGI scripts. ® Get the servlets.jar and add It to your c10sspath Servlets aren't part of the standard Java libraries; you need the servlets classes packaged into the servlets.jar file. You can download the servlets classes fromjava .sun.com, or you can get them from yourJava-enabled web server (like Apache Tomcat, at the apache.org site). Without these classes, you won't be able to compile your servlets. ® servlals.Jar Write a servlet closs by extending HttpServlet UU Ot; U~ 16 o A servlet isjust aJava class that extends HttpServlet (from the javax.servlet.http package). There are other types ofservlers you can make, but most of the time we care only about HttpServlet. I U 0 Cl61 1C1 Ql)l OI MyServletA.cJass public class MyServletA extends Htt:pServlet { ... } @ ~ WrIte an HTML page that invokes your servlet When the user clicks a link that references your servlet, the web server will find the servlet and invoke the appropriate method depending on the HITP command (GET, POST. etc.) MyPage,html <a href="aervlets/MyServletA">This is the most amazing servlet.</a> @ Web Server Make your servlet and HTML page Qvallable to your server This is completely dependent on your web server (and more specifically, on which version ofJava Servlets that you're using). Your ISP may simply tell you to drop it into a "Servlets" d irectory on your web site. But if you're using, say, the latest version of Tomcat. you'll have a lot more work to do to get the servlet (and web page) into the right location. (We just happen to have a book on this too .) 626 chapter 18 1-, " ~. ~l " ~ ' I, I ~. \" - .J I' '-- :1·.·. " ,I •. ; , -:::-,' - .' : ,. :',- - servlets and JSP ? " Dtherel&rH ume ~uestl9ns • Servlets are Java classes that run entirely on (and/or within) an HTIP(web) server. • Servlets are useful for running code on the server as a result ofclient interaction witha web page. For example, if a client submits information ina web page form, the servlet can process the information, add it toa database, and send back a customized, confirmation response page. • To compile a servlet, you need the servlet packages which are in the servlets.jar file. The servlet classes are not part ofthe Java standard libraries, so you need todownload the servlets. jarfrom java.sun.com orget them from a servletcapable web server. (Note: the Servlet library is included with the Java 2 Enterprise Edition (J2EE)) • To run a servlet, you must have a web server capable ofrunning servlets, such as the Tomcat server from apache.org. • Your servlet must be placed ina location that's specific to your particular web server, so you'll need to find that out before you trytorun your servlets. If you have a web site hosted byan ISP that supports servlets, the ISP will tell you which directory to place your servlets in. • Atypical servlet extends HttpServlet and overrides one ormore servlet methods, such as doGetO ordoPostO. • The web server starts the servlet and calls the appropriate method (doGetO, etc.) based on the client's request. • The servlet can send back a response by getting a PrintWriter output stream from the response parameter ofthe doGetO method. • The servlet 'writes' out an HTML page, complete with tags). 628 chapte r 18 Q.: What's a JSP, and how does it relate to servlets7 A: JSP stands for Java Server Pages. In the end, the web server turns a JSP into a servlet, but the difference between a servlet and a JSP is what YOU (the developer) actually create. With a servlet, you write a Java class that contains HTML in the output statements (if you're sending back an HTML page to the client). But with a JSP, it's the opposite-you write an HTML page that contains Java code! This gives you the ability to have dynamic web pages where you write the page as a normal HTML page, except you embed Java code (and other tags that"trigger" Java code at runtime) that gets processed at runtime. In other words, part of the page is customized at runtime when the Java code runs . The main benefit of JSP over regular servlets is that it's just a lot easier to write the HTML part of a servlet as a JSP page than to write HTML in the torturous print out statements in the servlet's response. Imagine a reasonably complex HTML page, and now imagine formatting it within println statements. Yikes! But for many applications, it isn't necessary to use JSPs because the servlet doesn't need to send a dynamic response, or the HTML is simple enough not to be such a big pain. And, there are still many web servers out there that support servlets but do not support JSPs, so you're stuck. Another benefit of JSPs is that you can separate the work by having the Java developers write the servlets and the web page developers write the JSPs. That's the promised benefit, anyway. In reality, there's still a Java learning curve (and a tag learning curve) for anyone writing a JSP, so to think that an HTML web page designer can bang out JSPs is not realist ic. Well, not without tools. But that's the good news-authoring tools are starting to appear, that help web page des igners create JSPs without writing the code from scratch. Q.: Is this all you're gonna say about servlets? After such a huge thing on RMI7 A: Yes. RMI is part of the Java language, and all the classes for RMI are in the standard libraries. Servlets and JSPs are not part of the Java language; they're considered standard extensions. You can run RMI on any modern JVM, but Servlets and JSPs require a properly configured web server with a servlet "container': This is our way of saying, "it's beyond the scope of this book." But you can read much more in the lovely Head First Servlets & JSp' remote deployment with RMI Just for fUt1, lets tMake the Phrase-O-Matic work as aservlet Now that we told you that we won't say any more about servlets, we can't resist servletizing (yes, we am verbify it) the Phrase--O-Matic from chapter 1. A servlet is stilljustJava. AndJava code can call Java code from other classes. So a servlet is free to call a method on the Phrase-O-Matic. All you have to do is drop the Phrase-O-Matic class into the same directory as your servlet, and you 're in business. (The Phrase-OMatie code is on the next page) . Try my new web-enabled phrase-o-matic and you'll be a slick talker just like the boss or those guys in marketing. import java .io.* ; import javax.servlet .- ; import javax.servlet .http .*; public class KathyServlet extends BttpServlet { public void doGet (HttpServletRequest request, BttpServletResponse response) throws ServletException, IOException Stringmake another phrase</a></p>H); out.println ("</BODY></BTML>") ; H ) ; ~ out. close () ; you are here ~ 629 Phrase-O-Matic code Phrase-O-Matic code, servlet-friet1dly This is a slightly different version from the code in chapter one. In the original, we ran the entire thing in a mainO method, and we had to rerun the program each time to generate a new phrase at the command-line. In this version, the code simply returns a String (with the phrase) when you invoke the static makePhrase 0 method. That way, you can call the method from any other code and get back a String with the randomly-composed phrase. Please note that these long String[] array assignments are a victim of wordprocessing here-don't type in the hyphens! Just keep on typing and let your code editor do the wrapping. And whatever you do, don't hit the return key in the middle of a String (i.e. something between double quotes). public class PhraseOMatic { public static String makePhrase() II make three sets of words to choose from String[] wordListOne = {"24/7","multi-Tier","30,OOO foot","B-to-B","win-win","frontend", "web-based" ,"pervasive" , "smart", "six-sigma","critical-path", "dynamic"}; String[] wordListTwo = {"empowered", "sticky", "valued-added", "oriented", "centric", "distributed", "clustered", "branded","outside-the-box", "positioned", "networked", "focused", "leveraged", "aligned", "targeted", "shared", "cooperative", "accelerated"}; String[] wordListThree = {"process", "tipping point", "solution", "architecture", "core competency", "strategy", "mindshare", "portal", "space", "vision", "paradigm", "mission"}; II find out how many words are in each list int oneLength = wordListOne.length; int twoLength = wordListTwo.length; int threeLength wordListThree.length; II generate int randl int rand2 int rand3 = three (int) (int) (int) random numbers, to pull random words from each list (Math.randomO * oneLength); (Math .randomO * twoLength); (Math.randomO * threeLength); II now build a phrase String phrase = wordListOne[randl] + " " + wordListTwo[rand2] + " " + wordListThree[rand3] ; II now return it return ("What we need is a " + phrase); 630 ch apt er 18 remote deployment with RMI An EJB server adds abunch of services that you don't get with straight RMI. Things RMI is great for writing and running remote services. But you wouldn't run something like an Amazon or eBay on RMI alone. For a large, deadly serious, enterprise application, you need something more. You need something that can handle transactions, heavy concurrency issues (like a gazillion people are hitting your server at once to buy those organic dog kibbles), security (not just anyone should hit your payroll database), and data management. For that, you need an enterprise applil;atUm server. like transactions, security. concurrency. database management. and networ"king. An EJB server steps into the middle of an RMI call and layers in all ofthe services. In java, that means a java 2 Enterprise Edition 02EE) server. AJ2EE server includes both a web server and an Enterprise JavaBeans(EJB) server, so that you can deploy an application that includes both servlets and EJBs. Like servlets, EJE is way beyond the scope of this book, and there's no way to show "just a little" EJE example with code, but we wiU take a quick look at how it works. (For a much more detailed treatment ofEJE, we can recommend the lively Head First EJB certification study guide.) EJ'B server:.-_---\-,-.- ..... '. This is only a small paM of the E,JB picture.! you are here ~ 831 a little J InI For our final trick... a little Jini We love jini. We think]ini is pretty much the best thing in Java. IfE]B is RMI on steroids (with a bunch of managers), Jini is RMI with wings. PureJava bliss. Like the EJB material, we can't get into any of the Jini details here, but if you know RMI, you're three-quarters of the way there. In terms of technology, anyway, In terms of mindset, it's time to make a big leap. No, it's time to fl>'. Jini uses RMI (although other protocols can be involved), but gives you a few key features including: Adaptive discovery Self-healing networks With RMT, remember, the client has to know the name and location of the remote service. The client code for the lookup includes the IP address or hostnarne of the remote service (because that's where the RMI registry is running) and the logical name the service was registered under. But withjini, the client has to know only one thing: the interface implemented by the service! Tha t 's it. So how do you find things? The trick revolves aroundjini lookup services. jini lookup services are far more powerfuJ and flexible than the RMI registry. For one thing,jini lookup services announce themselves to the network, automatically. When a lookup service comes online, it sends a message (using IP multicast) out to the network saying, "I'm here, if anyone's interested." But that's not all. Let's say you (a client) come online after the lookup service has already announced itself, you can send a message to the entire network saying, "Are there any lookup services out there?" Except that you're not really interested in the lookup service itself-you're interested in the services that are registered with the lookup service. Things like RMI remote services, other serializable java objects, and even devices such as printers, cameras, and coffeemakers. And here's where it gets even more fun: when a service comes online, it will dynamically discover (and register itself with) any Jini lookup services on the network. When the service registers with the lookup service, the service sends a serialized object to be placed in the lookup service. That serialized object can be a stub to an RMT remote service, a driver for a networked device, or even the whole service itself that (once you get it from the lookup service) runs locally on your machine. And instead of registering by name, the service registers by the interfacei: implements. Once you (the client) have a reference to a lookup service, you can say to that lookup service, "Hey, do you have anything that implements Scienuficf'alculator?" At that point, the lookup service will check its list of registered interfaces. and assuming it finds a match, says back to you, "Yes I do have something that implements that interface. Here's the serialized object the ScientificCalculator service registered with me." 632 chapter 18 remote deployment with RMI Adaptive discovery i" actio" @) Jini lookup service is launched somewhere on the network, and announces itself using IP multicast. machIne on the network somewhere ... • An already-running Jini service on another machine asks to be registered with this newly-announced lookup service . It registers by capability, rather than by name. In other words, it registers as the service interlace it implements. It sends a serialized object to be placed in the lookup service. machine on the network somewhere... another machine on the network Register me os something that implements ScientificCaJculator. Here's a serialized object that represents my service . Send it to anybody who asks... another machine on the network you are he re . 633 adaptive discovery in Jini Adaptive discovery itt actio", co"thtued... 速 A client on the network wants something that implements the ScientificColculator interface. It has no idea where (or if) that thing exists, so it asks the lookup service. machine on the network somewhere... @ another machine on the network The lookup service responds, since it does have something registered as a ScientificCalculator interface. machine on the network somewhere ... 634 another machine on the network ch apter 18 another machine on the network remote deployment with RMI Self.. healit1Q t1etwork it1 actio" e A Jini Service has asked to register with the lookup service. The lookup service responds with a "leese". The newly-registered service must keep renewing the lease, or the lookup service assumes the service has gone offline. The lookup service wants always to present an accurate picture to the rest of the network about which services are available. another machine on the network machine on the network somewhere .. , 速 another machine on the network The service goes off line (somebody shuts it down), so it fails to renew its lease with the lookup service. The lookup service drops it. machine on the network somewhere... another machine on the network you are here ~ 635 universal service project Fhtal Project: the Ut1iversal Service browser We're going to make something that isn't jini-enabled, but quite easily could be. It will give you the flavor and feeling ofJini, but using straight RMI. In fact the main difference between our application and aJini application is how the service is discovered. Instead of the Jini lookup service, which automatically announces itself and lives anywhere on the network, we're using the RMI registry which must be on the same machine as the remote service, and which does not announce itself automatically. And instead of our service registering itself automatically with the lookup service, we have to register it in the RMl registry (using Naming.rebind()). But once the client has found the service in the RMI registry, the rest of the application is almost identical to the way we'd do it inJini. (The main thing missing is the kasethat would let us have a self-healing network ifany of the services go down .) The universal service browser is like a specialized web browser, except instead of HTML pages, the service browser downloads and displays interactive Java GUIs that we're calling uniuersal services. RMIBrowser Choose a ~Ite k'f'ol'l\ the list The RMI TC"'oU: W"\IiU has a ~~\IiteList.O ...cthod -th.it ~N:Js batK this list. o-t SCT\litcs. WheJI +.hc \!Sa' ~Ieth OYIC, -the tl, eJI t. asks .f 0\'" the ad;\oIG1 scrvitc cDil.eRolli,,~ Da'l0fneWccKJ e+.tJ to be ~t batlt. ~'f'0ft\ the RMI yt"'oU: 636 chapter 18 W"\/itc. remote deployment w ith RMI How it works: CI ient starts up and does a lookup on the RMI registry for the service called "Service.Server", and gets back the stub. Server Service Browser (client) Client calls getServiceListO on the stub. The ServiceServer returns an array of services ServIce Browser (client) - "getServiceListO¡ - Server ~== "OK, here's an array of services" • Client displays the list of services in a GUI Service Browser (client) Server you are here. 631 universal service browser How it works, continued ... • User selects from the list, so client calls the get5erviceO method on the remote service. The remote service returns a serialized object that is an actual service that will run inside the client browser. Server 'getService(selectedSl/ct 'OK, here 's the service- • Client calls the getGuiPanelO on the serialized service object it just got from the remote service. The GUI for that service is displayed inside the browser, and the user can interact with it locally. At this point , we don't need the remote service unless/until the user decides to select another service. Service Browser (client) 638 chapter 18 remote deployment with RMI The classes and interfaces: Servlc9Server interface Servlce.Server Implements Remote A regular old RMI remote interlace for the remote service (the remote service has the method for getting the service list and returning a selected service). • gelServicesUslO gelServlceO class ServlceServerImpl Implements 5ervlceServer ServlceServerlmpl The actual RMI remote service (extends UnicastRemoteObject). Its job is to instantiate and store a/l the services (the things that will be shipped to the client), and reg ister the server itself (ServiceServerImpJ) with the RMI registry. gelServicesListO gelServiceO 3' class ServlceBrowser ServlceBrowser The client. It builds a very simple GUI, does a lookup in the RMI registry to get the Service5erver stub, then calls a remote method on it to get the list of services to display in the GUI list . mainO Interface Service This is the key to everything. This very simple interface has just one method, getGuiPanelO . Every service thot gets shipped over to the client must implement this interface. This is what makes the whole thing UNIVERSAL! By implementing this int e rface , a service can come over even though the client has no ideo what the actual class (or classes) are that make up that service. All the client knows is that whatever comes over, it implements the Service interface, so it MUST hove a getGuiPanelO method. The client gets a serialized object as a result of calling getService(selectedSvc) on the Service5erver stub, and all the client says to that object is, "r don't know who or what you are, but I DO know that you implement the Service interface, so I know I can call getGuiPanelO on you. And s ince getGuiPanelO returns a JPanel , rll just slap it into the browser GUI and start interacting with itl S&lVlce getGuiPaneJO ~ , , ,, , , ,, , / class MlnlMusicService implements Service , " '" ., .. ,, ... ,, ... ,, I I I . ,, .. I .... .. r - - - - - -... . .... gelGulPanelO , ,, Remember that fabulous little 'music video' program from the first GUI Code Kitchen? We've turned it into a service, and you can play it r----'-----~ DayOfTheWeekServlce over and over and ove.r until your roommates finally leave. gelGuiPanelO • " : DiceServlce class Dic£5ervice implements Service Got dice? If not , but you need some, use this service to roll anywhere from 1 to 6 virtual dice. for you. . . .. ., .. .. ... . MlnlMuslcServlce gelGulPanelO class DayOfTheWukSuviu Implements Suvlce Were you born on a Friday? Type in your birthday and find out . you are here. 639 universal service code interface ServiceServer (the remote interface) import java.rmi.*; public interface ServiceServer extends Remote ( Object [] getServiceList () throws RemoteException; Service getService(Object serviceKey) throws RemoteException; public interface Service extends Serializable { public JPanel getGuiPanel(); 640 chapter 18 remote deployment with RN ass ServiceServerlmpl (the remote Implementation) java.rm.i.*; java.util.*; java. rm.i. server. • ; \ A \"\O'I'",a RM\ 'l"'f\tl'l\tn~bOt'l public class ServiceServerImpl extends UnicastRemoteObject implements ServiceServer BashMap serviceList; public ServiceServerImpl () throws RemoteException I setUpServices() ; . 'b aliu tht ad:.u.al ) ~1..d.D'r i~ taIled, '~I . M~itS~:r.il{,t., dt.. V/'h(.Yt the t. . CDit.tStY~iU) M,nI private void setUpServices () ( . _"\ ~e.YV~t.es ...nIV(;,'~ serviceList = new BashMap(); service.List.put("Dice Rolling Service", new DiceService(); servlceList. put ("Day of the Week Service" I new DayOfTheWHkService () ) ; serviceLlst.put("Visual Music Service", new MiniMusicService()); L\..6 KyVit.e.~ (the. MaKe. V'" t t\olc.'" ~ public Object£] getServiceList () { te") . t l ('" S ys tern . ou t . ~rl.n. n an r6ltlO; return servJ..ceLJ..st.keySet() ,toArray(); Client. t.alts .1, . .u at~~I\..K"""1 '' ''1v> o'oy:tb) a~ flo' sh"',,,~ ~as\olMa~J WIth a the. 'KC'/ ). v>c. r I\il .... e (-to'!" . Q....L /• .L f' display in the"UlJSbroIn oraet- f.o ;)Cl: a Ib~ O't s~viles f.o ~ d r wsa- (s.o t.he lI.S~ tan ~1.1. ) h iln array o-t f. e Ob' t e... l; one We ~ il'lSiae) by "'d kin9 ~:f .(eva. th~h ii: has S&-il'l~s j .. the lf~lhN!ap. We ,'Y JlI.St the i(EYS t.h~f. a"e J W<»It lCha ~n ~lt."~1 C' - • I LAn eLl; f.he diePlt asks .for ;1 b . -.. ~.,.v'te objai l; y l.dJlln~ gef.StrvileO. a!:rrd public static void main (Strinq[] arqa) { tty { Naminq.rebind("ServiceServer", new Servic:eServerImpl(»: catch(Exception ex) ( ex,printStackTraoe() ; } Syst&m.out.println("Ramote sarvice is running"); you are he re . 64 ServlceBrowser code class ServlceBrowser (the client) import java.awt.·i import javax. swing. * i ~rt java.rmi.*i import java.awt .event.*; public class SarviceBrowsar JPanel mainPanel; JComboBox serviceList; ServlceServer server; public void buildGUI () ( JFrame frame = new JFrame(~RMI Browser H ) ; mainPanel D new JPanel(); frame .geteontentPane () . add (BorderLayout.CEN'rEll, mainPanel); -e'nis Method .~ lQOku.\,' o~s -e'n~ \~~iteL.irtO . ~j.ot[] ~i~~.M;:dd:'~:~::::~~t;~\_~ •• servlceList = new JCombo80x (services) ; JCorr-bo~ (the \~~. n~ cJ eat'n thi~ ',,, the oYTd'1' Mav-e disylcl'1cl\'le Sh,~ fr8llMi. getcontentpane () . add (BorderLayout.NOR'l'B, serviceList); serviceList.addActionListener(new MyListL1stener(»; frame.setSize(SOO,500) ; frama.setVisible(true) ; 642 chapter 18 remote deployment w it h RMI Object[] getServicesList() { Object obj = null; Object[] services = null ; try { obj = Naming .lookup(~rmi://127 .0.0.1/ServiceServer"); ) catch(Exception ex) { ex .printStackTrace() ; } server = ~ (ServiceServer) obj; Cast the stlAb to th so that we tall tall e ~".o~e illtertate type ~e ervlteListO Oil it ' try { services = server.getServiceList() ; catch(Exception ex) { ex.printStackTrace() ; return services; ~ __ ~d:.set'viteLis路W ~ives lAS the at't'ay 0+ Objetts, -Chat we display in the JCoMboBo'/C. tOt' Hie lASet' to selett tt'OI'l\. class MyListListener implements ActionListener { public void actionPerformed(ActionEvent ev) { public static void main(String[] args) new ServiceBrowser() .buildGUI(); } you are here. 643 DiceService code class DiceService (a universal service, implements Service) eee lUIl l flD'Vrltt - Dlullolll"9_. import javax.swing.·; import java.awt.event.*; import java.io.*; -~ '= - fT.1Wl SH public class DiceService implements Service ( JLabel label; JComboBox numOfDice; public JPanel getGuiPanel() ( JPanel panel = new JPanel () ; JButton button = new JButton("Roll 'em!"); String[] choices = {"I", "2", "3", "4", "5 H } ; numOfDice = new JComboBox(ohoices) ; label new JLabel ("dice values here"); button.addActionListener(new Rol1EmListener(»; panel. add (numOfDice) ; ftt:r' th . panel. add (button) ; St:r:i: i f..t:C»Il'",POl"idl'rt ",tthod! nt: ",eihod 0+ tht: panel. add (label) ; thil S~i~t: : a~e--;_ v'e 07It: t.he dibrt's 901\1Id ~II whtrl return panel; ..... a t ·.1' SZ}t and loaded. rAJ, u" do whdie = ua y tne ~t:u:jlliPa"eJO eihod I vty y04l. JPal'u~" $(I it bioi/as tht: at.t.:~1 d" J as II~n~ ~s you retur" a l\ In - public class RollEmListenar implements ActionListener { publiC! void aotionPerfo:rmed(ActionEvent ev) { / I roll the dice String diceOutput = ""; String selection = (String) numOfDice . get5electedItem () : int numOfDiceToRoll = Integer.parselnt(selection) ; for (int i 0; i < numOfDiceToRoll; i++) ( int r (int) ((Math. random () * 6) + 1); diceoutput += (" " + r); = Ilt-ro,~ '-JUt. = ) label. setText (diceOUtput) ; ~arpen your penCil Think about waysto improve the DiceService. One suggestion:using what you learned In the GUI chapters, makethe dice 9raphicaI.Use a rectangle, and draw the appropriate number of circles on each one,correspondIng to the roll for that particular die. 644 chapter 18 1---1 •• remote deployment with RMI class MlnlMusicService (a universal service, implements Service) ............. 1'i.~.o.!~6~ ~=~"'~~~==;:~ ~w... import j avax . sound. midi ... ; import java.io.*; import javax.sving.*; illlport java.art.*; illlport java.awt.event .*; r I /'t..\\ public clau MiniMusicService implements Service { MyDrawPanel myPanel; -n... ~.,.\iLt,~ ",rth06. L __ d'sola'ol does's '\ I In<.. il °rt 6 D~~ a" . lot (..,httr( t.he ,h·a\Oli,,~ ~'f"'I' tu.a\\~ public JPanel getGuiPanel () ( ta~lt.S ..,tIl C\l~ JPanel mainPanEll = new JPanel () ; thC "'~\.edJ myPanel = new MyDrawPanel () ; ~ ya," . JButton playltButton = new JButton("Play it N ) ; playItButton.addActionLiatener(ne. PlayItLlstener()); mainPanel.lldd(myPanel) ; mainPanel.lldd(playItButton); return mainPanel; publio class PlayItListener implements ActionLiatener publie void actionPerfo~(ActionEventev) ( try ( Sequencer sequencer = MidiSystam. qetSequencer () ; sequencer.open(); sequencer.addControllerEventLiatener(myPanel, new lnt[] (127»); Sequence seq = new sequence (Sequence. PPQ, 4); Track track = s&q.creataTrack() ; for (lnt i = 0; i < 100; i+= 4) { lnt rNum = (int) «(Math.random() * 50) + 1); if (rNum < 39) ( II so now only do it if num <38 (15\ of the time) track.add(makeEvent(144,l,rNum,lOO,l») ; track.add(makeEvent(116,l,121,O,i)); track.add(makeEvent(128,l,rNum,lOO,1 + 2»; ) II end loop sequencer .aetSequence(seq); sequencer.start(); sequencer.setTampoInBPM(220) ; catch (Exception ex) (ax.printstackTrace();} } 1/ close actionperfor=ed ) II close inner class you are here. 645 MiniMuslcServlce code class MiniMusicService, continued••. public MidiEvent makeEvent(int comel, int chan, int one, int two, int tick) { MidiEvent event = null; try ( ShortMessage a = new ShortMessage(); a.setMessage(comel, chan, one, two); event = new MidiEvent(a, tick); }catch(Exception e) { } return event; class MyDrawPanel extends JPanel implements ControllerEventListener II only if we got an event do we want to paint boolean mag = false; public void controlChange(ShortMessage event) { msg = true; repaint 0 ; public Dimension getPreferredSize() return new Dimension(300,300); public void paintComponent(Graphics g) { if (msg) { y = (int) = (int) «Math.randomO * 40) + 10); «Math.random() * 40) + 10); g .fillRect(x,y,ht, width); mag = false; } II close i f } II close method } II close inner class II close class remote deployment with RMI class DayOfTheWeekService (a universal service, Implements Service) import import import import import import ,.e 9:" javax.swing .-; java .awt .event .·; java .awt .* ; java.lo.·; java.util.* ; java. text . • : ............ D.rroln...... s-w.. -.. , .... 0., IF .... j l OOI ~ ... ~I . ~ ~.., I public class DayOfTheWeekServicEl implements Service ( JLabel outpu t:Labe1 ; JComboBox month; JTextFleld day; J'l'extField year; public. JPanel qetGuiPanel () ( JPanel panel "" new JPanel () ; JButton button"" new JButton(~Do it!"); button.addAetionListener(new DoltListener(»; outputLabel. "" new .JLabel ("date appearll he.ra ") ; DateFormatsymbols dataStuff = new DateFonaatsymbols () ; month new JComboBox(dateStuff.g8tMOnthll{»; day"" new JTextField(8); year ;: new J'l'extField (8) : JPanel inputpanel ;: new JPanel (new Gr iciLayout (3 ,2) ) ; inputPane1.add(new JLabel ("Month") ; inputpanel .add(month); inpu tPanel . add (new JLabel ("Day") ) ; inputPanel.add(day); inputPanel. add {new JLabel ("Year"» ; inputPanel.ac1d(year) ; panel. add (inputpanel) : panel .add(button); panel.add(outputLabel); return panel; = public class DoItLlstener implements A.ctionLilltener J O\l "etd Cl ....e....i"dev public void aotionPerformed(ActionEvent &V) ( R ~O" to t'hayUY 10 -\:.t:'1 ~~t.t,,~ ""o'r\t.~ 6 int monthNum == month.getselectedIndexO; ; h "",,'olY a",d d t. ho"" e"'lY, lnt dayNum • Integer .paraeInt (day. qatTextO ); ~e is t\'~"'t\~ ~\; ~\assthe int yearNum :: Integer. parselnt (year. getTaxt () ) ; \) 11 u.st. it. lASt> the Ca IoU 1 tt.i~'1 a fott.ern Calendar 0 • Calendar. qetInst&nce () ; et.4\tDa~f~t \~d \"~,,t ",,-l:., c • set (Calendar. MONTH J monthN\DJl): ~""'f~ t,nt f c. set (Calendar. DAY_OF_MONTH I dayNum); CIf c. set (Calendar. YEAR, yearNum): Date date :: c. qetTime () ; String dayOfWeelt ;: (new SimplaDataFor:JUt ("EEEE format (date) : outputLabel . setText (dayOfW&elt) ; n\ f H Aho. dw t"'' ' N ) ) • you are here ~ 647 the end ... sort of Congratulations! You made it to the end. Of course, there's stili the two appendices. And the Index. And then there's the web site••• There's no escape, really. 648 hapter 18 Appendix A: * Final Code Kitchen * Chris: groove2 revised NIgel: dance beat FinaLLy, the complete version 01 the BeatBox! It connects to a simple MusicServer so that you can send and receive beat patterns with other clients. this is a new appendix 649 final BeatBox code Final BeatBox client program Most of this code is the same as the code from the CodeKitchens in the previous chapters, so we don't annotate the whole thing again. The new parts include: GUI - two new components are added for the text area that displays incoming messages (actually a scrolling list) and the text field . NETWORKING - just like the SimpleChatClient in this chapter, the BeatBox now connects to the server and gets an input and output stream. THREADS - again, just like the SimpleChatClient, we start a 'reader' class that keeps looking for incoming messages from the server. But instead of just text, the messages coming in include TWO objects: the String message and the serialized ArrayList (the thing that holds the state of all the checkboxes.) import import import import import import import import java.awt.* ; javax .swing .*; java.io.*; javax. sound. midi . *; java.util .*; java.awt.event.*; java.net.*; javax.swing .event.*; public class BeatBoxFinal JFrame theFrame; JPanel mainPanel ; JList incomingList ; JTextField userMessage; ArrayList<JCheckBox> checkboxList; int nextNum; Vector<String> listVector = new Vector<String>() ; String userName; ObjectOUtputStream out; ObjectInputStream in; HashMap<String, boolean[]> otherSeqsMap new HashMap<String, boolean[]>(); Sequencer sequencer; Sequence sequence; Sequence mySequence = null; Track track; String[] instrumentNames = {~Bass Drum", ~Closed Hi-Hat", ~Open Hi-Hat","Acoustic Snare", ~Crash Cymbal", ~Hand Clap", ~High Tom", ~ H i Bongo", ~Maracas", ~Whistle", ~Low Conga", ~Cowbell", ~Vibraslap", ~Low-mid Tom", ~High Aqogo", ~Open Hi Conga"); int[] instruments 650 appendi x A = {35,42,46,38,49,39,50,60,70,72,64,56,58,47,67 ,63); appendix A Fina l Code Kitchen public static void main (String[] args) ( new BeatBoxFinal() .startUp(args[O]); II args[O] is your user ID/screen name fI\ l .f \J0~V- stv-eeYl Yla",e· ""----- AdO. a t..o",....aYlo-liYle av-~~",eYl1;. ov- I public void startup(String name) { c...'/.cl....yIe·' 01ID j'ava 0eatBo'l-f=iYlal thef=lash [} userName = name; II open connection to the server try ( Socket sock = new Socket("l27 .0 .0.l", 4242); out = new ObjectOutputStream(sock.getOUtputStream(»; in = new ObjectInputStream(sock.getInputStream(»; Thread remote = new Thread(new RemoteReader(»; remote . start 0 ; catch(Exception ex) ( System.out.println("couldn't connect - you'll have to play alone .") ; } setupMidi 0 ; buildGUI () ; II close s ta rtUp ~lA.1 tode, YlothiYl~ Yle-." hev-e public void buildGUI () theFrame = new JFrame("Cyber BeatBox"); BorderLayout layout = new BorderLayout() ; JPanel background = new JPanel(layout) ; background.setBorder(BorderFactory.createEmptyBorder(10,10,10,10»; checkboxList = new ArrayList<JCheckBox>() ; Box buttonBox = new Box(BoxLayout.Y_AXIS); JButton start = new JButton("Start"); start.addActionListener(new MyStartListener(»; buttonBox.add(start); JButton stop = new JButton("Stop") ; stop.addActionListener(new MyStopListener(»; buttonBox.add(stop); JButton upTempo = new JButton("Tempo Up"); upTempo.addActionListener(new MyUpTempoListener(»; buttonBox.add(upTempo); JButton downTempo = new JButton("Tempo Down") ; downTempo.addActionListener(new MyDownTempoListener(»; buttonBox.add(downTempo) ; JButton sendlt = new JButton("sendIt"); sendlt.addActionListener(new MySendListener(»; buttonBox.add(sendIt); userMessage = new JTextField(); you are here ~ 651 final BeatBox code buttonBox.add(userMessage): Box namaBox ::; new Box (BoxLayout. 'i_AXIS) i for (int i = 0; i < 16; i++) { nameBox. aeld(naw Label (i.nst%UmentNames [i]Âť ; background.. add (Bord.erLayout . EAST , buttonBox) i background.. add (Bord.erLayou t. WEST, nameBox); theFrame. qetContentpane () . add. (background) ; GridLayout grid'" new GridLayout(16,16); grid. aetVgap (1) ; grid.setHqap(2): mainPanel '" new .Ji'anel (grid) : background. add (BorderLayout. CENTER, UlA1nPanel); for (int i '" 0; i < 256; i++) ( JCheckBox c '" new JCheck.Box () ; c .setse1ected(false); checkboxList.add.(c) ; DIlU.nPanel. add. (c) ; II end loop theFrAmQ.aetBounds(50,50,300,300); theFrAmQ .pack () ; theFrame.setVisible(true); II close buildGUI public void. setUpMidi () try ( sequencer IS MidiSystem .getSequencer () i sequencer.open(); sequence IS new sequence (Saquence.PPQ,4) ; track '" sequance.areateTrack(); sequencer.setTempolnBPM(120); catch (Exception e) (e .printstackTrace () ;) II close setUpMldi 652 appendix. A appendix A Final Code Kitche n public void buildTrackAndStart() { ArrayList<Integer> trackList = null ; II this will hold the instruments for each sequence .deleteTrack(track); h the t..het.k'ooi-tS track = sequence.createTrack(); '0 'fIo\ki\'\~ thY"o~ t t to a\'\ for (int i trackList = 0; i < 16; i++) { = new ArrayList<Integer>(); ?J",M 0 kO,t.kSJtel oYlO .,..aYYMi~~ ,chl/~Ylt koY" l to «jtt "flt C\'\0 ",okiYl~ the .o. L . 1~1r;,. ACTL.:'! oSi" i\'\stv-",,,,e\'\t ~. • o\e1'o, b",t It IS ~)< L oY"el/iol.lS . et,,\1 ..0"', L Y"eteY" "IP , T\oils 'S yY" ' r . t.hay"eY"s, so L' a~aiYl. . the yyel/lOI.lS L LL f' "'\\ ei-y\aYlo"IOYl 'fIoS 1\'\ • to ~e" "fie -t Code~ltt..he\'lS for (int j = 0 ; j < 16; j++) { JCheckBox jc = (JCheckBox) checkboxList.get(j + (16*i»; if (jc.isSelected(» ( int key = instruments[i]; trackList.add(new Integer(key»; else { trackList.add(null); II because this slot should be empty in the track } } II c lose inner l oo p makeTracks{trackList); } II c los e o ute r l oop track.add(makeEvent{192,9,1,0,15»; II - so we always go to full 16 beats try { sequencer.setSequence(sequence); sequencer . setLoopCount (sequencer. LOOP_CONTINUOUSLY) ; sequencer .start(); sequencer.setTempolnBPM{120); } catch(Exception e) {e.printStackTrace();} } II close me tho d public class MyStartListener implements ActionListener public void actionPerformed(ActionEvent a) { buildTrackAndStart() ; } II c lose actionPe r f o rmed II close inner cl as s public class MyStopListener implements ActionListener public void actionPerformed(ActionEvent a) ( sequencer . stop () ; } II c l o se a c t ionPerformed II close inne r c lass public class MyUpTempoListener implements ActionListener public void actionPerformed(ActionEvent a) { float tempoFactor = sequencer.getTempoFactor(); sequencer.setTempoFactor«float) (tempoFactor * 1.03»; } II close act i o nPer f o rmed II c l o se inner c l ass you are here . 653 final BeatBox code public class MyDownTempoListener ~lements ActionListener public void actionPerformed(ActionEvent a) { fioat tempoFactor = sequencer .getTempoFactor(); sequencer.setTempoFactor«fioat) (tempoFactor * .97»; public class MySendListener implements ActionListener ( public void actionPerformed(ActionEvent a) { II make an arraylist of just the STATE of the checkboxes boolean[] checkboxState = new boolean[256] ; for (int i = 0; i < 256; i++) { JCheckBox check = (JCheckBox) if (check.isSelected(» { checkboxState[i] = true; } } II close l oop String messageToSend = null; try { out.writeObject(userName + nextNum++ + ". " + userMessage.getText(» ; out .writeObject(checkboxState); catch(Exception ex) { System.out.println("Sorry dude. Could not send it to the server.") ; } userMessage.setText(" "); } I I c l ose action Pe r f o r med II c l os e i nne r c l a s s public class MyListSelectionListener implements ListSelectionListener public void valueChanged(ListSelectionEvent le) { if (!le .getValueIsAdjusting(» { String selected = (String) incomingList .getselectedValue(); if (selected != null) ( II now go to the map, and change the sequence boolean[] selectedState = (boolean[]) otherSeqsMap.get(selected); changeSequence(selectedState); sequencer.stop(); buildTrackAndStart(); } II close valueCha nged I I c l os e i nne r class 654 append ix A appendix A Final Code Kitchen public class RemoteReader implements Runnable { boolean[] checkboxState = null; String nameToShow = null ; Object obj = null; public void rune) { try { while«obj=in.readObject(» != nul l ) { System.out.println("got an object from System.out.println(obj .getClass(»; String nameToShow = (String) obj ; checkboxState = (boolean[]) in.readObject(); otherSeqsMap.put(nameToShow, checkboxState) ; listVector .add(nameToShow) ; incomingList.setListData(listVector) ; W~~II a ~esS4 (deseriaJi~) {h~ ~~ssage alld.L e t:ob ill, lIIe read 1110 c: ie t L ( t hetkb. ,;he Arra rs the the JL~~ stau vallAesf~ Ids+, 0+ Boo/eall . J C to~p I'l tidd ·.L J } I I close wh i l e IS a 1;111 U Ohe n,;..L Add' I,; 1;0 11 catch(Exception ex) {ex.printStackTrace();} 0+ the!:;ts p thil'l.9: 'lOlA k 1 9 to a JList I I close run fashioned A data !Vetto :~p a Vettor II close inner c lass JList to rrayLisi) a ; an oJd_ .for wh t lAse thai. Ve~k hen t~1J t h public class MyPlayMineListener implements ActionListener { a to display' .L as it's SOIA e public void actionPerformed(ActionEvent a) { III ,;h~ lisi.. rt~ 't. i f (mySequence != null) { sequence = mySequence ; II restore to my original } } II cl ose ac tionPerformed II close inner class This Ift~thod is tailed h t sOlftethin~ .ft-Olft th I' ~ he lASer seletts checkboxSta~,n~ the patter: ;;.fh e /NI'j1hf;[)/ATE.LY i public void changeSequence(boolean[] for (int i = 0; i < 256; i++) { JCheckBox check = (JCheckBox) checkboxList.get(i) ; i f (checkboxState[i]) { check .setSelected(true) ; else { check .setSelected(false) ; } II c l ose l oop I I close change Sequence e one .. ey selet ted. All the MIDI sh H is ~uttl~ the SaMe as it was in the yYe'JiolAS 'Jet-Slon. public void makeTracks(ArrayList list) Iterator it = list . iterator(); for (int i = 0; i < 16 ; i++) { Integer num = (Integer) it.next(); if (num != null) { int numKey = num.intValue() ; track.add(makeEvent(144 ,9 ,numKey, 100, i» ; track .add(makeEvent(128,9 ,numKey,100, i + 1»; } } I I clo se loop I I close makeTrac ks() you are he re. ._ - -- - - - - - - - 655 final BeatBox code public MidiEvent makeEvent(int comd, int chan, int one, int two, int tick) { MidiEvent event = null; try { ShortMessage a = new ShortMessage(); a. setMessage (comd, chan, one, two); event = new MidiEvent(a, tick); }catch (Exception e} { } \i~e the last ~e'f"si()'l\路 return event; Not\\'''~ "eVl路 J~t } II close rnakeEvent } II c l os e c l a s s ~yourpenCii What are some of the ways you can improve this program? Here are a few ideas to get you started : 1) Once you select a pattern, whatever current pattern was playing is blown away. If that was a new pattern you were working on (or a modification of another one), you're out of luck.You might want to pop up a dialog box that asksthe user if he'd like to save the current pattern. 2) If you fail to type in a command-line argument, you just get an exception when you run it! Put something in the main method that checks to see if you've passed in a command-line argument. If the user doesn't supply one, either pick a default or print out a message that saysthey need to run it again, but this time with an argument for their screen name. 3) It might be nice to have a feature where you can click a button and it will generate a random pattern for you.You might hit on one you really like. Better yet, have another feature that lets you load in existing 'foundation' patterns, like one for jazz, rock, reggae, etc. that the user can add to. You can find existing patterns on the Head First Java web start. 656 appen dix A appendix A Final Code Kitchen Final BeatBox server programlnputStream in; Socket clientSocket; public ClientHandler(Socket socket) ( try { clientSocket = socket; in = new ObjectlnputStream(clientSocket .getlnputStream(Âť; catch(Exception ex) (ex.printStackTrace();} } II clos e const ruc t or public void run {} { Object 02 = null ; Object 01 = null; try { while ((01 = in.readObject(Âť 02 != null) ( = in .readObject(); System .out .println("read two objects") ; tellEveryone(01, 02); I I close wh ile catch(Exception ex) (ex.printStackTrace();} I I cl os e run II c lo s e inn e r c lass you are here ~ 657 final BeatBox code public void qo() { clientOutputStreams = new ArrayList<ObjectOutputStream>(); try { ServerSocket serverSock = new ServerSocket(4242); while (true) { Socket clientSocket = serverSock.accept(); ObjectOutputStream out = new ObjectOutputStream(clientSocket.qetOutputStream(» clientOutputStreams.add(out); Thread t = new Thread(new ClientHandler(clientSocket»; t. start() ; System.out.println("qot a connection"); } }catch(Exception ex) { ex.printStackTrace(); } II c lose go public void tellEveryone(Object one , Object two) Iterator it = clientOutputStreams.iterator(); while(it .hasNext(» { try { ObjectOutputStream out = (ObjectOutputStream) it .next(); out.writeObject(one) ; out.writeObject(two) ; }catch(Exception ex) {ex.printStackTrace() ;} II close t e l l Everyone } I I close class 658 appendix A Appendix B The Top Ten Topics that almost made it into the Real Book... We covered a lot of ground,and you're almost finished with this book. We'll miss you, but before we let you go, we wouldn't feel right about sending you out Into JavaLand without a little more preparation. We can't possibly fit everything you'll need to know Into this relatively small appendix. Actually, we did originally Include everything you need to know about Java (not already covered by the other chapters), by reducing the type point size to .00003. It all fit, but nobody could read It. So, we threw most of it away,but kept the best bits for this Top Ten appendix. This really Is the end of the book. Except for the Index (a must-read I). this is a new appendix 659 bit manipulati on #10 Bit Manipulation Why do you care? The Shift Operators We've talked about the fact that there are 8 bits in a byte, 16 bits in a short, and so on. You might have occasion to turn individual bits on or off. For instance you might find yourself writing code for your newJava enabled toaster, and realize that due to severe memory limitations, certain toaster settings are controlled at the bit level. For easier reading, we're showing only the last 8 bits in the comments rather than the full 32 for an int). These operators take a single integer primitive and shift (or slide) all of its bits in one direction or another. If you want to dust off your binary math skills, you might realize that shifting bits left effectively multiplies a number by a power of two, and shifting bits right effectively divides a number by a power of two. We'll use the following example for the next three operators: int x = -11; Bitwise NOT Operator: ... II II x = -x; bits are 00001010 bits are now 11110101 The next three operators compare two primitives on a bit by bit basis, and return a result based on comparing these bits. We'll use the following example for the next three operators: int x = 10; int y = 6; II bits are 00001010 II bits are 00000110 Bitwise AND Operator: Right Shift Operator: II » This operator shifts all of a number's bits right by a certain number, and fills all of the bits on the left side with whatever the original leftmost bit was. The sign bit does not change: & int y This operator returns a value whose bits are turned on only if both original bits are turned on: int a = x «y; bits are 11110101 Ok, ok, we've been putting it off, here is the world's shortest explanation of storing negative numbers, and two's complement. Remember, the leftmost bit of an integer number is called the sign bit. A negative integer number in Java always has its sign bit turned on (i.e. set to 1). A positive integer number always has its sign bit turned 011(0). Java uses the two's complement formula to store negative numbers. To change a number's sign using two's complement, flip all the bits, then add 1 (with a byte, for example, that would mean adding 00000001 to the flipped value). This operator 'flips all the bits' of a primitive. int x = 10; II =x »2; II bits are 11111101 Unsigned Right Shift Operator: »> Just like the right shift operator BUT it ALWAYS fills the leftmost bits with zeros. The sign bit mightchange: bits are 00000010 Bitwise OR Operator: This operator returns a value whose bits are turned on only if either of the original bits are turned on: int a = x I y; II A This operator returns a value whose bits are turned on only if exactly oneof the original bits are turned on: int a = x 660 A y; appendix B II bits are 00001100 II bits are 00111101 Left Shift Operator: bits are 00001110 Bitwise XOR (exclusive OR) Operator: int y = x»> 2; « Just like the unsigned right shift operator, but in the other direction; the rightmost bits are filled with zeros. The sign bit mightchange. int y = x «2; II bits are 11010100 appendix B Top Ten Reference #9 Immutability Why do you eare that Strlt1i!..are 1"''Mutable? When yourJava programs start to get big. you 'll inevitably end up with lots and lots of String objects. For security purposes, and for the sake of conserving memory (remember yourJava programs can run on teeny Java-enabled cell phones), Strings in java are immutable. What this means is that when you say: String s = "0"; for (int x :: 1; x < 10; x++) ( 8 = S + Xi ) What's actually happening is that you're creating ten String objects (with values "0", "01", "012 ", through "0123456789") . In the end s is referring to the String with the value "0123456789", but at this point there are ten Strings in existence I Whenever you make a new String, theJVM puts it into a special part of memory called the ' String Pool' (sounds refreshing doesn't it.?). If there is already a String in the String Pool with the same value, the JVM doesn't create a duplicate, it simply refers your reference variable to the existing entry. TheJVM can get away with this because Strings are immutable; one reference variable can't change a String's value out from under another reference variable referring to the same String. The other issue with the String pool is that the Garbage Collector doesn't go there. So in our example. unless by coincidence you later happen to make a String called "01234", for instance, the first nine Strings created in our forloop willjust sit around wasting memory. Why do you care that Wra2!.ers are It"",utable? In the Math chapter we talked about the two main uses of the wrapper classes: • Wrapping a primitive so it can pretend to be an object. • Using the static utility methods (for example, Integer. parseln t ()). It's important to remember that when you create a wrapper object like: Integer iWrap :: new Integer(42); That's it for that wrapper object. Its value will always be 42. There is no settermethodfor a wrapper object. You can , of course, refer iWrap to a different wrapper object, but then you'll have two objects. Once you create a wrapper object, there's no way to change the value of that objectl Roses orered. . Strln ' Violets oreblue .11S ar.'mmutobf, . e, 'NfDppe,s are too. How does this save memory? Well, if you're not careful, it dcesn 't! But if you understand how String immutability works, than you can sometimes take advantage of it to save memory. If you have to do a lot of String manipulations (like concatenations, etc .), however, there is another class StringBuilder, better suited for that purpose. We'll talk more about StringBuilder in a few pages. you are here ~ 661 assertions #8 Assertions We haven't talked much about how to debug yourJava program while you 're developing it. We believe that you should learn java at the command line, as we've been doing throughout the book. Once you're aJava pro, if you decide to use an IDE*, you might have other debugging tools to use . In the old days, when a java programmer wanted to debug her code, she'd stick a bunch of System .out.println ( ) statements throughout the program, printing current variable values, and "I got here" messages, to see if the flow control was working properly. (The ready-bake code in chapter 6 left some debugging 'print' statements in the code.) Then, once the program was working correctly, she'd go through and take aU those System . out.println( ) statements back out again . It was tedious and error prone. But as ofJava 1.4 (and 5.0), debugging got a whole lot easier. The answer? Assertions Assertions are like Systern .out.println ( ) statements on steroids. Add them to your code as you would add println statements. The java 5.0 compiler assumes you 'll be compiling source files that are 5.0 compatible, so as ofJava 5.0, compiling with assertions is enabled by default. At runtime, if you do nothing, the assert statements you added to your code will be ignored by the JVM, and won 't slow down your program. But if you tell the JVM to enable your assertions, they will help you do your debugging, without changing a line of code I Some folks have complained about having to leave assert statements in their production code, but leaving them in can be really valuable when your code is already deployed in the field. !fyour client is having trouble, you can instruct the client to run the program with assertions enabled, and have the client send you the output. If the assertions were stripped out of your deployed code, you'd never have that option. And there is almost no downside; when assertions are not enabled, they are completely ignored by thejVM, so there's no performance hit to worry about. 662 appsnuix B How to make Assertions work Add assertion statements to your code wherever you believe that something must be true. For instance : assert (height> 0); II if true, program continues normally II if false, throw an AssertionError You can add a little more information to the stack trace by saying: assert (height> 0) : "height = " + height + " weight = " + weight; The expression after the colon can be any legal Java exp ression that resolves to a non-null value. But whatever you do . don't create assertions that cJumge an objed's stale! If you do, enabling assertions at runtime might change how your program performs. Compiling and running with Assertions To compilewilh assertions: javac TestDriveGame.java (Notice that no command line options were necessary.) To run with assertions: java -ea TestDriveGame '" IDE SlaDW for Integrated Development Environment and includes tools such as Eclipse, Borland's jbuilder, or the open source NetBeans (netbeans.org). appendix B Top Ten Reference #7 Block Scope In chapter 9, we talked about how local variables live only as long as the method in which they're declared stays on the stack. But some variables can have even shorter lifespans. Inside of methods, we often create blocks of code. We've been doing this all along, but we haven't explicitly talkedin terms of blocks. Typically, blocks of code occur within methods, and are bounded by curly braces {}. Some common examples of code blocks that you'll recognize include loops (jar, while) and conditional expressions (like if statements) . +,nod b\ot.\I. s-tarl. ~ -tne ,.,e Let's look at an example: void doStuff () , = Lnt; x 0; {~ J /_ lotal vayiable stofe" ~ ~ = x + y; ~ ..., H- e\'lQ o.f the tor 1'0 LI fYOD e"', stofed t.o y.. ~ r 1 f blotk a"d '1 is I " 0\'11'1 the 0\" oof· { ~be~i\'l\'li\'l~' a for (int y = 0; y < 5; y++) x t.o the e"tiYe ",ethod OY a"d \I aye both i\'l st°fe I loop blotk In the previous example, y was a block variable, declared inside a block, and y went out of scope as soon as the for loop ended. Your java programs will be more debuggable and expandable if you use local variables instead of instance variables, and block variables instead of local variables, whenever possible. The compiler will make sure that you don't try to use a variable that's gone out of scope, so you don 't have to worry about runtime meltdowns. you are here • 663 linked invocations #6 Linked Invocations While you did see a little of this in this book, we tried to keep our syntax as clean and readable as possible. There are, however, many legal shortcuts inJava, that you'll no doubt be exposed to, especially if you have to read a lot code you didn't write. One of the more common constructs you will encounter is known as linked invocations. For example: StringBuffer sb = new StringBuffer("spring"); sb = sb.delete(3,6) .insert(2,"umme") . d e l e t e Ch a r At (l ) ; System.out.println("sb = " + sb); II result is sb = summer What in the world is happening in the second line of code? Admittedly, this is a contrived example, but you need to learn how to decipher these. 1 - Work from left to right. 2 - Find the result of the leftmost method call , in this case sb. delete (3, 6) . If you look up StringBuffer in the API docs, you'll see that the delete () method returns a StringBuffer object. The result of running the delete () method is a StringBuffer object with the value "spr". 3 - The next leftmost method (insert () )is called on the newly created StringBuffer object "spr". The result of that method call (the insert () method), is also a StringBuffer object (although it doesn 't have to be the same type as the previous method return), and so it goes, the returned object is used to call the next method to the right. In theory, you can link as many methods as you want in a single statement (although it 's rare to see more than three linked methods in a single statement). Without linking, the second line of code from above would be more readable, and look something like this: sb sb sb sb.delete(3,6); sb.insert(2,"umme"); sb.deleteCharAt(l); But here's a more common, and useful example, that you saw us using, but we thought we'd point it out again here. This is for when your main 0 method needs to invoke an instance method of the main class, but yo u don't need to keep a reference to the instance of the class. In other words, the main 0 needs to create the instance only so that main 0 can invoke one of the instance's methods. c lass Faa { public static void main (String [] args) [ we do,,' t. ea-e ab~t. new Faa () . go (); ~ we 'fJa"t. +.0 t.all ~o 't. bot.heY assi~l'Iil'l~ L r.: il'ls-tal'lt.e, so we l.\Ol'l I' "t,he rOO r.: ~,. t.t. +.0 a ye-teyel'lt.e路 void go () t.he l'Iew rOO 0 ~e L L () ,0,",,,.1 I I here's what we REALLY want. .. } 664 append ix B appendix B Top Ten Reference #5 Anonymous and Static Nested Classes Nested classes come In many flavors In the GUl event-handling section of the book, we started using inner (nested) classes as a solution for implementing listener interfaces. That's the most common, practical, and readable form of an inner class-where the class is simply nested within the curly braces of another enclosing class. And remember, it means you need an instance of the outer class in order to get an instance of the inner class, because the inner class is a member of the outer/enclosing class. But there are other kinds of inner classes including static and arwnymous. We're not going into the details here, but we don't want you to be thrown by strange syntax when you see it in sorneone's code. Because out of virtually anything you can do with the Java language, perhaps nothing produces more bizarre-looking code than anonymous inner classes. But we'll start with something simpler--static nested classes. Static nested classes You already know what static means-s-something tied to the class, not a particular instance. A static nested class looksjust like the non-static classes we used for event listeners, except they 're marked with the keyword static. public class FooOuter ( class Barlnner void saylt() { System. out . printin ("method of a static inner class"); ) } class Test ( publ ic s t a tic void faa. sayIt () ; Static nested classes are more like reguJar non-nested classes in that they don't enjoy a special relationship with an enclosing outer object. But because static nested classes are still considered a member oi the enclosing/outer class, they still get access to any private members of the outer class... but only the onesthat arealso statu. Since the static nested class isn't connected to an instance of the outer class, it doesn't have any special way to access the non-static (instance) variables and methods. you are here ~ 665 when arrays aren't enough #5 Anonymous and Static Nested Classes, continued The diHerenee between nested and Inner AnyJava class that's defined within the scope of another class is known as a rIÂŁSU:d class. It doesn't matter if it's anonymous, static. normal, whatever. If it's inside another class, it's technically considered a nested class. But non-statu nested classes are often referred to as inner classes, which is what we called them earlier in the book. The bottom line : all inner classes are nested classes, but not all nested classes are inner classes. Anonymous inner classes Imagine you 're writing some GUI code, and suddenly realize that you need an instance of a class that implements ActionListener. But you realize you don't have an instance of an Actionl.istener, Then you realize that you also never wrote a class for that listener. You have two choices at that point: 1) Write an inner class in your code, the way we did in our GUI code, and then instantiate it and pass that instance into the button's event registration (addActionListenerO) method. OR 2) Create an anonymous inner class and instantiate it, right there.just-in-time. Litera11y right where YlJU are at the pmnt you need the listener or,jed. That's right, you create the class and the instance in .the place where you'd normally be supplying just the instance. Think about that for a moment-it means you pass the entire cla.sswhere you'd normally pass only an instance into a method argumentl } 666 append ix 8 access levels #4 appendix B Top Ten Reference Access Levels and Access Modifiers (Who Sees What) Java has four access levels and three access modifiers. There are only three modifiers because the default (what you get when you don't use any access modifier) is one of the four access levels. Access Levels (in order of how restrictive they are, from least to most restrictive) public ~ rlAblit ....eans any cede anywhere tan aUess the flAblit thin~ (by thin~' we ....ean tlass, I/ariable, ....ethod, tonstrlAttor, eUJ. protected "''-:- - - froutud works jlAst like detalAlt (tode in the sa....e fatka~e has auess), EXCEPT it also allows slAbtiasses olAtside the default ~ private ~ fatka~e to inherit the yrotetud thin~. detalAlt aUess ....eans that only cede within the sa....e fatka~e as the tlass with the detalAlt thin~ tal'l aUess the detalAlt thin~. fril/at e ....eans that ol'lly tode withil'l the same dass tal'l aUess the yril/ate thin~. Keef il'l mil'ld it means fril/au to the tlass, not fril/ate to the objett. One D~ tal'l see al'lother D~ objett's fril/au shU, blAt a Cat tan't see a D~'s yril/aus. Access modifiers public protected private Most of the time you'll use only public and private access levels. public Use public for classes, constants (static final variables) , and methods that you're exposing to other code (for example getters and setters) and most constructors. private Use private for virtually all instance variables, and for methods that you don't want outside code to call (in other words, methods used by the public methods of your class). But although you might not use the other two (protected and default), you still need to know what they do because you'll see them in other code. you are here ~ 667 when arrays aren't enough #4 Access Levels and Access Modifiers, cont. default and protected default Both protected and default access levels are tied to packages. Default access is simple-it means that only code within the samepackage can access code with default access. So a default class, for example (which means a class that isn't explicitly declared as puhlil:) can be accessed by only classes within the same package as the default class. But what does it really mean to MUSS a class? Code that does not have access to a class is not allowed to even think about the class. And by think, we mean use the class in code. For example, if you don't have access to a class, because of access restriction, you aren't allowed to instantiate the class or even declare it as a type for a variable, argument, or return value. You simply can't type it into your code at all! !fyou do, the compiler will complain . Think about the implications-a default class with public methods means the public methods aren't really public at all. You can't access a method if you can't see the class. Why would anyone want to restrict access to code within the same package? Typically, packages are designed as a group of classes that work together as a related set. So it might make sense that classes within the same package need to access one another's code, while as a package, only a small number of classes and methods are exposed to the outside world (i.e. code outside that package). OK, that's default. It's simple-if something has default access (which, remember, means no explicit access rnodifierl), only code within the same package as the default thing (class, variable, method, inner class) can access that thing. Then what's proucudfor? protected Protected access is almost identical to default access, with one exception: it allows subclasses to inherit the protected thing, even iJthose subclasses are ()UtsUU the pad1.age of the super-class they extend: That's it. That's all protected buys you-the ability to let your subclasses be outside your superclass package, yet still inherit pieces of the class, including methods and constructors. Many developers find very little reason to use protected, but it is used in some designs, and some day you might find it to be exactly what you need. One of the interesting things about protected is that-unlike the other access levels-protected access applies only to inheritance. If a subclass-outside-the-package has a nference to an instance of the superc1ass (the superclass that has, say, a protected method), the subclass can 't access the protected method using that superclass referencel The only way the subclass can access that method is by inheritingit. In other words, the subclass-outside-the-package doesn't have access to the protected method. it just has the method, through inheritance. 668 appendix B appendix B Top Ten Reference String and StringBuffer #3 String and StringBufferlStringBuilder Methods Two of the most commonly used classes in th e Java API are String and StringBuffer (rem em ber from #9 a few pages back, Strings are immutable, so a StringBuffer/StringBuilder can be a lot mor efficient if you're manipulating a String). As ofJ ava 5.0 you should use the StringBuilderclass in stead of StringBuffer, unless your String manipulations need to be thread-safe, which is not co m m on . Here's a brief overview of the key methods in these classes: Both String and StringBuffer/StringBuilder classes have: char charAt(int index) ; / / what char is at a ce rtain position int Iength t): / / how long is th is String substring(int start, int end ); / / get a part of this String to.String () : / / what's the String value of th is To concatenate Strings: String co n cat (stri n g); / / for the String class String appen d (Stri ng); / / for StringBuffer & StringBuilder The String class has: String replace (char old, char new); / / replace all occurences of a char String substring(int begin, int end); / / get a portion of a String char [] toCharArrayO ; / / convert to an array of chars String toLowerCase () ; / / convert all characters to lower case String toUpperCaseO; / / co nvert all characters to upper case String trim () ; / / remove whitespace from the ends String valueOf( char []) / / make a String out of a char array String valu eOf(int i) / / make a String out of a primitive / / other primitives are supported as well The StringBuffer & StringBuilder classes have: StringBxxxx delete (int start, int end); / / delete a portion StringBxxxx insert(int offset, any primitive or a char [] ); / / insert something StringBxxxx replace (int start, int end, String s); / / replace this part with this String Stringlsxxxx reverser): / / reverse the SB from front to back void setCharAt(int index, char ch); / / replace a given character Note: StringBxxxx refers to either StringBufferor StringBuilder, as appropriate. you a re here. 669 when arrays aren't enough #2 Multidimensional Arrays In most languages, if you create, say, a 4 x 2 two-dimensional array, you would visualize a rectangle, 4 elements by 2 elements, with a total of 8 elements. But inJava, such an array would actually be 5 arrays linked together! In java, a two dimensional array is simply an arra)' of arrays. (A three dimensional array is an array of arrays of arrays, but we'll leave that for you to play with.) Here's how it works i.nt ]] (] a2d = new int [4] (2]; The JVM creates an array with 4 elements. Each. of these four elements is actually a reference variable referring to a (newly created), int array with 2 elements. int[J(] int array object (int[][D Working with multidimensional arrays - To access the second element in the third array: int x = a2d [2) [1) i I I remember, 0 based! - To make a one-dimensional reference to one of the sub-arrays: int [J copy = a2d [1 J; -Short-cut initialization ofa2x3array: intlJ I) x = ( l 2,3,4 }, I 7,8,9 } }; - To make a 2d array with irregular dimensions: int(J [J y = new int (2) [); II makes only the first array, with a length of 2 y[Ol new int (3J; II makes the first sub-array 3 elements in length y(l] = new int 15J; II makes the second sub-array 5 elements in length 670 appendix B enumerations appendix B Top Ten Reference And the number one topic that didn't quite make it in... #1 Enumerations (also called Enumerated Types or Enums) We've talked about constants that are defined in the API, for instance, JFrame.EXIT_ON_CLOSE. You can also create your own constants by marking a variable static final. But sometimes you'll want to create a set of constant values to represent the only valid values for a variable. This set of valid values is commonly referred to as an enumeration. Before Java 5.0 you could only do a half-baked job of creating an enumeration in Java. As ofJava 5.0 you can create full fledged enumerations that will be the envy of all your pre:Java 5.0-using friends. Who's in the band? Let's say that you 're creating a website for your favorite band, and you want to make sure that all of the comments are directed to a particular band member. The old way to fake an "enum": public static final int JERRY = 1; public static final int BOBBY = 2 ; public static final int PHIL = 3; I I later in the code ~ if (sele ctedBandMember == JERRY) { II do JERRY related stuff We're h . Lh L b Lh L " ;; a;; y;; e ;;'l'WIe we ~ot here s " det te dBalldMel'Wlber" has a valid val"t.! 째flll~ The good news about this technique is that it DOES make the code easier to read. The other good news is that you can't ever change the value of the fake enums you've created; JERRY will always be 1. The bad news is that there's no easy or good way to make sure that the value of selectedBandMember will always be 1, 2, or 3. If some hard to find piece of code sets selectedBandMember equal to 812, it's pretty likely your code will break.. . you are he re ~ 671 when arrays aren't enough #1 Enumerations, cont. The same situation using a genuine Java 5.0 enum. While this is a very basic enumeration, most enumerations usually are this simple. A new, official "enum": ~ public enum Members { JERRY, BOBBY, PHIL }; public Your enum extends java.lang.Enum When you create an enum, you're creating a new class, and you're implicitly extending java . lang . Enum. You can declare an enum as its own standalone class, in its own source file, or as a member of another class. Using "if" and "switch" with Enums Using the enum we just created, we can perform branches in our code using either the if or swi tch statement. Also notice that we can compare enum instances using either == or the . equa ls () method. Usually == is considered better style. ______ Assi~r.ir.~ ar. er.1mI yal~ to a variable, Members n = Members.BOBBY; ~ if (n .equals(Members.JERRY)) System. out.println("Jerrrry!"); if (n == Members.BOBBY) System.out.println("Rat Dog"); I. "--- ~o-\:)l J t"'e$e '" ""$ Members ifName = Members.PHIL; switch (ifName) { case JERRY: System.ou t.print("make it sing "); case PHIL: System. o ut.print("go deep ") ; case BOBBY: System.out.println("Cassidy! "); 672 appendix 8 r, el 0'(" """ , "Rat D~ ,"",,-\:,ed, enumerations #1 appendix B Top Ten Reference Enumerations, completed A really tricked-out version of a similar enum You can add a bunch of things to your enum like a constructor, methods. variables, and something called a constant-specific class body. They're not common, but you might run into them: public class HfjEnum ( L sstO il'\ -\:p ay~"",el'\1:. ~a L.\ ________ nils is a~\Il.~ oeda.... td ~ o'Wl· enum Names ( ~__ L.' • _ ~~_ . ~e ~Q'I\S JERRi' ("lead guitar) { public String sings () return "plaintively";} } th $O-t.llleo { ~ Tkese a....e e ok' dau bodies". ; / "~Ylt.-s?etl It o . -th ~ t.he'" as O"t:rt"IO.ln~ e LLod (iYl -thIS t,4St \)asit eYl......... n.n r oO . the "S'IYI~O)l ""et.hcxl), iot S,,\~ IS t.llleo 0" a lIayid'ole ...ith aYl en-lIaIv.t o-tr J~RRY ~ BOBBY· Think J o BOBBY ("rhythm guitar") ( public String s i nqs () I return "hoarsely"; ) ), PHIL ("bass") ; private String instrument; ~~-- This is Names(String instrument) ( this. instrument = instrument; OWIU -the tr>lAM'S +t*' ) public String getlnstrument{) return this. instrument; toYlShvttm-. It. ruYlS e6th detla.. . ed eYllAM I/al~ t.his ~ it. . . "'W\.\ t.hree tiMes)o (iYl ) public String sings () ( return "occasionally"; public static void main(String [] args) (0 ___________ for (Names n : Names.values(») ( ~ System.aut.print<n); System.out.prin~(", instrument: "+ n.getlnstrument<)): System.aut.println(", sings: " + n.sings()); %java HfjEnum Not.itt that the ba.sit Usi,,~()) ""et.hod is oYlly lJlled wheYl the JERRY, i n s t r ume n t : lead guitar, sings: plaintively BOBBY , instrument : rhythm guitar, sings : hoarsely PHIL, instrument: bass , sings: occasionally e"w.- I/al~ has 110 tOPl.Sta7lt- % speti.fit tlas.s body. you are here . 673 when arrays aren't enough A Long Trip Home Captain Byte of the Flatland starship "Traverser" had received an urgent, Top Secret transmission from headquarters. The message contained 30 heavily encrypted navigational codes that the Traverser would need to successfully plot a course home through enemy sectors. The enemy Hackarians, from a neighboring galaxy, had devised a devilish code-scrambling ray that was capable of creating bogus objects on the heap of the Traverser's only navigational computer. In addition, the alien ray could alter valid reference variables so that they referred to these bogus objects . The only defense the Traverser crew had against this evil Hackarian ray was to run an inline virus checker which could be imbedded into the Traverser's state of the art Java 1.4 code. Captain Byte gave Ensign Smith the following programming instructions to process the critical navigational codes: "Put the first five codes in an array of type ParsecKey. Put the last 25 codes in a five by fi ve, two dimensional array of type QuadmntKey. Pass these two arrays into the plolCourseO method of the public final class Shipblavigation. Once the course object is returned run the inline virus checker against all the programs reference variables and then run the NavSim program and bring me the results." A few minutes later Ensign Smith returned with the NavSim output. "NavSim output ready for review, sir", declared Ensign Smith. "Fine", replied the Captain. "Please review your work". "Yes sir!", responded the Ensign, "First I declared and constructed an array of type ParsecKey with the following code ; ParsecKey 0 p = new ParsecKey[5]; , next 1 declared and constructed an array of type QuadrantKey with the following code: QuadrantKey 0 0 q = new QuadrantKey [5] [5]; . Next, I loaded the first 5 codes into the ParsecKey array using a 'for' loop, and then I loaded the last 25 codes into the Quadrantkey,array using nested 'for'{oops, Next, I ran the virus checker against all 32 reference variables, I for the ParsecKey array, and 5 for its elements, I for the QuadrantKey array. and 25 for its elements. Once the virus check returned with no viruses detected, I ran the NavSim program and re-ran the virus checker, just to be safe... Sir!" Captain Byte gave the Ensign a cool , long stare and said calmly, "Ensign, you are confined to quarters for endangering the safety oftbis ship, I don't want to see your face on this bridge again until you have properly learned your Java! Lieutenant Boolean, take over for the Ensign and do this job correctly!" Why did the captain confine the Ensign to his quarters? 674 appendix B A Long Trip Home Captain Byte knew that in Java, multidimensional arrays are actually arrays of arrays. The five by five QuadrantKey array ' g' , would actually need a total of31 reference variables to be able to access all of its components: I - reference variable for 'q' 5 - reference variables for q [0) - q [4J 25 - reference variables for q [0] [0] - q [4] [4] The ensign had forgotten the reference variables for the five one dimensional arrays embedded in the 'q' array. Any of those five reference variables could have been corrupted by the Hackarian ray, and the ensign's test would never reveal the problem . you are here ~ 675 This isn't goodbye Bring your brain over to wickedlysmart.com
|
http://issuu.com/cs-groupc/docs/head-first-java-2nd-edition_split_2
|
CC-MAIN-2015-35
|
refinedweb
| 22,970
| 65.62
|
11 July 2012 15:57 [Source: ICIS news]
LONDON (ICIS)--Crude oil initially rose about 35 cents/bbl on Wednesday, adding to earlier gains, when this week’s ?xml:namespace>
August NYMEX light sweet crude futures rose from around $85.45/bbl before the figures from the Energy Information Administration (EIA) were published to hit a high of $85.79/bbl, a gain of $1.88/bbl from Tuesday’s close
However, they then lost ground when it was revealed that most of the drop in inventories was on the West Coast and at 14:40 GMT, August NYMEX crude was trading around $85.10/bbl, up $1.19/bbl from Tuesday’s close of $83.91/bbl.
On ICE Futures, August Brent also gained ground, rising from around $99.50/bbl before the figures were published to hit a high of $99.67/bbl. At 14:40 GMT, August Brent was trading around $99.10/bbl, up $1.13/bbl from Tuesday’s close of $97.97/bbl.
Analysts’ predicted this week’s US stock figures would show a draw on crude stocks of about 1.2m bbl, a build on distillate of around 400,000
|
http://www.icis.com/Articles/2012/07/11/9577505/crude-makes-short-lived-gain-on-us-crude-stock-draw.html
|
CC-MAIN-2014-41
|
refinedweb
| 197
| 76.62
|
Goldilocks and SML
December 15, 1999."
Not everyone is Goldilocks. Here is a prediction of mine from XML-DEV of November 1997 (Simon uses bozo here ironically):
> From: Simon St. Laurent <SimonStL@classic.msn.com>
> ... isn't everything an element? Wouldn't that be simpler?" "Why can't we leave out these strings since they are not needed for parsing?" and so on.
Just because questions about simplifying XML are predictable and repetitive does not mean they are trivial, or that they are asked idly. In a fit of eye-rolling about the same issues coming up again and again, in November 1999 I posted a little hoax on XML-DEV announcing XML 2-0alpha:
I am happy to announce the release of the XML 2.0alpha specification. It has been created by using all the criticisms of XML over the last year.
A non-exhaustive list of its main features compared to XML 1.0 would include the following:
* It only uses UTF-8 (e.g., some W3C people) and so gets rid of the
encoding header, numeric character references.
* It gets rid of PIs (e.g., TimB-L).
* It gets rid of parameter entities (e.g., some schema people).
* It gets rid of DTDs (e.g., many of the same people).
* It gets rid of notations (since no one knows what they are for still).
* It gets rid of entities (since XLink will replace them).
* It gets rid of the ' as a literal delimiter.
* It gets rid of attributes (since all we need are elements).
* It gets rid of IDs (since local names are old hat).
* It gets rid of PUBLIC identifiers (since everything should be a URI).
* It gets rid of CDATA sections (since they make grepping unsafe).
* It gets rid of using the name of an element to key its type (i.e., you
have to use a namespace URI and the munged name).
* It gets rid of elements and embedded markup (e.g., Ted Nelson).
* It gets rid of chunks of Unicode through "early normalization" (e.g., the W3C I18n WG).
* Because the only delimiter is the comment delimiter, the need for
& and > is removed; because the string <!-- can be represented as
<!<@-- -->-- there is no need for < either.
This gives us quite a nice markup language: XML 2.0alpha which consists of only
* data
I propose that we should all spend the next 100 years discussing this,
and that every W3C specification in the meantime should try to
influence the outcome by supporting only the subset of XML 1.0 they
like, until consensus is reached by people outside the original
developers of XML.
Indeed, wise implementors should delay until XML 3.0. There is talk
that allowing all these characters poses internationalization problems,
so it is possible that only ASCII characters may be used in the future.
Because of WAI reason, it may be that only visually distinct characters may be allowed: so XML 3.0 will only consist of one or more occurrences of the letters O and X. This will provide substantial benefits for compression and binary representation, as well as direct representation of certain games.
A few hours later, across the international data line, Don Park hit back—floating a proposal for just what I was knocking, calling it SML. In the future, I should just shut up! But no: the questions that the SMLers are asking are good. XML has been made with a defensible set of choices of language features (which is not to say that they are all the choices that I would have made); debate can clarify why the choices were made and what the alternatives are. A technology is engineered to the perceived tradeoffs of the times, and times change..
Homogeneous or Curdled?
Some people say that we can get simplification by layering. Let us put aside the obvious answer that cutting the cake in a different way does not reduce its size. Instead, is it possible to devise a highly layered XML?
Here is a model of XML in 16 skinny layers. The XML 1.0 specification does not present itself using these layers, and I think it would be crazy to implement a system using them. But layering is possible: let us assume stream processing:
- Storage/Transportation: this layer is concerned with bits and bytes: HTTP and MIME for example, or file access.
- Compression: this layer is concerned with size reduction.
- Encoding handling: this layer handles the XML encoding header, converting the data to Unicode.
- Normalization: this layer (not part of current systems) copes with Unicode's inherited internal infelicities.
- Parameter entity handler: this layer reads parameter entities and expands them, like a partial cpp.
- DTD marked section handler: this layer ignores or includes marked sections, like a partial cpp.
- Entity expander: this layer reads general entities and expands them and numeric character references, like a partial cpp.
- Comment stripper: this layer strips comments, like a partial cpp.
- CDATA section handler: this layer converts CDATA sections to a form using
<,
&and
>.
- Server-side PI handler: some PIs we handle before parsing; they don't require context. Most server-side tags in server pages use some PI variant.
- Parser: we now have a language the size of SML—just elements, attributes, three built-in character references and PIs, but with various declarations; this layer constructs some programmatic representation of the data, perhaps a tree or a stream.
- Attribute defaulting: this layer adds default values.
- Namespace prefixing: this layer adds explicit namespaces to all elements.
- Validation: this stage checks the document against the types declared.
- PI handling: this stage processes the PIs.
- Element handling: this stage processes elements.
That gives a very rich environment. None of these layers needs to contain a full XML parser: for example, the CDATA handler only needs to look for "<![" and "]]>" and to skip "<?" and "?>". So if lack of layerability is not a problem, let us turn 180 degrees and say that XML has too many layers. How do we evaluate too many?
Reductionists think that everything that can be removed from XML should be removed. First to go would be the minor features of XML—CDATA, comments and PIs. But these result in almost no reduction in complexity. Then goes DTDs and entities, a biggy. Then goes internationalization. Then goes attributes, numeric character references, and long end-tags. The language that is left is simple, something like:
data ::= "<" name " " data* ">" | "<" | ">" | "&" | other-UTF-8 name ::= ASCII
This type of syntax is akin to s-expressions, familiar to and beloved by LISP programmers, and not dissimilar to Microsoft's RTF either. (However, it has consistently failed to be popular for writing.) At this stage, the softer reductionists will back off a bit, and the ones who want to keep compatibility with XML will win, due to the presence of infrastructure: long end-tags, numeric character references, and perhaps even attributes come back.
So within XML we have so many layers, and the W3C is developing more as its members think of new applications: linking, datatyping, transformation, formatting, scripting, packaging, etc. The problem with treating low-level layers as optional rather than required is that it allows vendors to pick and choose which layers to provide. What if one big browser company decided not to use namespaces, while another chose to include them? Or if one company decided that the comment layer was no good while another kept it? This would bring us all the joys of the HTML market, where you have to create different pages for each browser many times. Now perhaps we will always have to transform to different pages, but why should we have to transform to different syntaxes? That is not simplicity. SML must be justified by a respect for plurality rather than by the pursuit of simplicity.
There is always a tendency to confuse a technology with its specification. This was also the demon with SGML, where the difficulty of ISO 8879 was often blamed as being intrinsic to SGML. I hope it is not a general requirement of a language specification to present its subject in an "as-layered-as-possible" form.
Fuzzy?
Without taking too much time on this, there is a claim that because attributes and elements are not conceptually different, there should be no way to mark them up. The answer I would give is two-fold:
- First, it is a natural pattern in elements to have a head containing metadata and a body containing data; this is true all the way from the top-level elements, such as html, down to the lowest level.
- Second, elements often push function-invocation in programs, and functions typically pull attribute values: so there is clear pragmatic justification for a distinction being made. This distinction also applies to comments and processing instructions.
So there are general conceptual distinctions at play. Why should these be marked up using special delimiters rather than merely by reserving some special element names? There are several reasons, including readability, simplification of paths, simplification of content models, simplification of at least some kinds of programs, and so on. For example, if there are no attributes, imagine how to handle IDs and xml:lang. A small simplification in grammar but a corresponding complication in client layers.
But doesn't the element/attribute distinction complicate mapping data from databases which do not have this distinction? To go from a database to XML requires a mapping convention rather than a new syntax. Removing attributes from XML would remove one question from the list of issues such a system must resolve, but it does not remove them all: the flattening of data, the representation of relations, the order of records, the format of dates, the mapping of case-insensitive names, etc.
Sugary?
Even though I have used the term reductionist, perhaps some SMLers will prefer to see themselves as essentialists: stripping XML of its syntactic sugar down to its essential features. But the essential feature of XML is that it is a markup language. It is not merely a language for computer-to-computer exchange—it also provides minimal features aimed at making life easier for direct readers and writers of data.
A markup language is not a data-modeling language. To look at justifications for syntactic and deeper features in XML, one cannot rely on statements like "my data needs this" or "this makes sense in my data model." There is no intrinsic reason to make one atomic piece of information an element or attribute from a data modeling point of view. But from the view of markup, of how a human reads and knows, there is every reason.
So I suspect that many of the proponents of SML really are not interested in making a markup language at all: they want a data representation language, or a data packaging language, or a data transport language. The XML strategy is to build on top of a solid, human-friendly basis. This focus on human factors has allowed SGML-family markup languages to thrive while others wither.
If visual distinctiveness is a primary human-factors requirement for a markup language, the various syntax alternatives within XML make sense:
- the different delimiters for different kinds of tags
- the alternative delimiters for attribute values
- the provision of CDATA sections, which have delimiters that are very unlikely to be found in real data
Even the fact that we cannot use numeric character references in element names shows that the undesirability of unreadable names is considered greater than the desirability of universal transcodibility: it is a tradeoff only justifiable by considering XML as a markup language rather than merely as a data interchange language.
Full of Indigestible Foreign Muck?
XML is a technology that provides sufficient internationalization for a fair and usable global infrastructure.
My job for the last year has been to try to encourage Western developers not to ignore the rest of the world. It is not just me trying to tell everyone what they should do; I have been employed (in a desperation-move) by an Asian research institution who sees that without an internationalized WWW, the opportunities for economic development towards an information-economy are retarded. Backyard hacks are fun, but after they escape the backyard, they have different requirements.
Technology has an impact; if someone proposes a technology for the World Wide Web, it is fair game to have it examined for its predictable social effects. It is also fair game to have the values underlying it pointed out.
When internationalization is built into the infrastructure, the whole world becomes much simpler. For most non-Americans and people who work in other scripts than Western European alphabets, you can imagine that the "simpler" in SML would be rather ironic.
XML was designed in the hope that, by paying full attention to the current plurality of character encodings and by providing a reliable method to label these encodings, XML can act as a Trojan Horse to bring Unicode everywhere. It is naive to think that current and legacy encodings will go away without an off ramp. I suspect that this off ramp is one of the major attractions of XML for large corporations: the number of encodings supported by IBM's and Sun's XML parsers is testament to that.
Furthermore, it is not enough to merely allow different encodings. There must be both a mechanism to label the encoding when the operating system does not provide it (actually, I don't know if any operating system provides this) and a convention to resolve disputes in encodings as data passes through different network layers: for XML, the first is the XML header and the second is the rule that a higher-layer protocol has precedence over a lower. (The XML header has precedence over any markup in the document, and a MIME header has precedence over the XML header.)
The kites for SML currently only provide UTF-8. This is five years ahead of its time, in my estimation. It is very desirable that people move towards UTF-8 and UTF16 as a matter of priority; but SML is not the thing that will achieve it—XML will, by acting as a virus for Unicode alongside Java.
Whither SML?
Many of the current justifications offered for SML are unconvincing—the need for tiny SML for personal devices, electronic dictionaries, and WAP in phones. Or the need for even easier implementation, when so many public domain libraries and classes for XML are available. Or how our lives will be simplified by having more choice, when we can already choose our own simplified XML. I am happy for dialects like SML to exist as part of a methodology giving a well-thought-out way of using or analyzing or implementing XML, but I think we lose out if it is promoted as an alternative syntax outside XML. If the SMLers hatch some major difference in functionality and there are good reasons why this functionality should be expressed at the lexical level, and if XML cannot express it well, then there is certainly room for a non-XML grammar. But many things that look different turn out to be yet another structure that namespaces, fixed attributes or PIs are fine for.
So what areas is an SML suited for? Perhaps reverse engineering will give some clue:
- If it is UTF-8 only, it is not practical for local use for much non-Western data.
- If it allows other sets but does not allow this to be unambiguously labeled, is it not suitable for transnational use.
- If it does not include PIs, it is not suitable for server use (on the evidence that most server-side includes use special delimiters for what are in fact PIs).
- If it does not include some mechanism for literal text, it is not suitable for direct data entry.
- If it does not include syntactic distinction for the most common targets of tags (i.e., comments, elements, processing instructions, entity references), then people must introduce another layer straight away.
- If it does not have basic attribute defaulting, it must be bundled with some transformation language; so it is best for recipient systems that know the defaults..
It is good to see some creativity and ingenuity at work. There is nothing wrong with that recipe.
|
http://www.xml.com/pub/1999/12/sml/goldilocks.html
|
CC-MAIN-2017-43
|
refinedweb
| 2,719
| 53.81
|
The kill system call can be used to send any signal to any process group or process.
#include <sys/types.h>
#include <signal.h>
int kill(pid_t pid, int sig);
For more information checkout: man 2.
A process or thread can send a signal to itself with the raise function. The raise function takes just one parameter, a signal number.
In a single-threaded program it is equivalent to kill(getpid(), sig). In a multithreaded program it is equivalent to pthread_kill(pthread_self(), sig). If the signal causes a handler to be called, raise will only return after the signal handler has returned.
#include <signal.h>
int raise(int sig);
For more information checkout: man 3 raise
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
static volatile sig_atomic_t doneflag = 10;
static void setdoneflag(int signo) {
printf("\nIn SignalHandler - setdoneflag\n");
doneflag=0;
}
int main (void) {
signal(SIGINT, setdoneflag);
while(doneflag--)
{
printf("In While loop - %d\n",doneflag);
if(doneflag==5)
raise(2);
else
sleep(1);
}
printf("Program terminating ...\n");
return 0;
}
|
http://codingfreak.blogspot.com/search?updated-max=2009-09-08T15:02:00%2B05:30&max-results=5&reverse-paginate=true
|
CC-MAIN-2017-34
|
refinedweb
| 171
| 60.11
|
Subject: Fwd: [boost] Library devs only: Boost v2.x distro design questions
From: Mateusz Loskot (mateusz_at_[hidden])
Date: 2018-10-24 20:00:09
Hi Stefan, Christian,
Shall we agree on common ground for GIL?
I think it would be better to speak as a team than posting
individually - there's enough chaos already :-)
I personally am willing to leave the answer to Stefan,
as GIL team representative who is also interested
and experienced regarding general Boost development
direction, architecturing building, packaging configuration etc.
We are already at C++11, but I don't mind jumping to C++17.
Cheers,
Mat
---------- Forwarded message ---------
From: Niall Douglas via Boost <boost_at_[hidden]>
Date: Wed, 24 Oct 2018 at 21:12
Subject: [boost] Library devs only: Boost v2.x distro design questions
To: Boost Developers List <boost_at_[hidden]>
Cc: Niall Douglas <s_sourceforge_at_[hidden]>
Splitting this off from the other thread, can I get feedback from Boost
library maintainers ONLY. Not users, not non-maintainers.
Q0: Are you willing to do the work to adapt your library for any Boost
v2.x distro if it were to happen?
Q1: Would you prefer a new, separate Boost v2.x distro in parallel to
the v1.x distro, or to keep everything within one v1.x distro?
Q2: Would you be intending to keep your library inside Boost v1.x, move
it exclusively to Boost v2.x, or have it exist in both Boost v1.x and
v2.x but with different defaults configured? Also, would the version in
v1.x be hard forked from any v2.x edition i.e. the v1.x edition would
get orphaned?
Q3: What C++ standard should Boost v2.x's master build system be
defaulted to? C++ 11, 14, 17 or 20?
Q4: Should Boost v2.x use a boost2 namespace, or namespace boost {
inline namespace v2 { }}? (This affects whether Boost v2 and v1 editions
of your library can be used within the same translation unit)
Q5: What master buildsystem should Boost v2.x use? Boost.Build, cmake,
Build2, something else?
Q6: Should Boost v2.x's libraries auto integrate individually into some
package manager? If so, which ones do you intend to support?
Q7: Should Boost v2.x have official release versions done by release
managers, or should it be a rolling release of "whatever last passed the
CI @ 100%"? Note that you can have this, and have official release
versions of "especially known good" editions too.
Q8: Should Boost v2.x use a local HTML server to serve documentation,
and the static HTML docs be dispensed with as a requirement?
Q9: What are your feelings towards the use of Python to script
infrastructure and tooling in any Boost v2.x? For example, Python to run
a local HTML server to serve documentation locally, or Python to build a
release etc
Q10: What parts of core Boost are you utterly dependent upon, and would
absolutely need ported to any Boost v2.x as no STL alternatives exist?
I could go on, but let's stop there for now.
Niall
_______________________________________________
Unsubscribe & other changes:
-- Mateusz Loskot,
Boost list run by Boost-Gil-Owners
|
https://lists.boost.org/boost-gil/2018/10/0105.php
|
CC-MAIN-2019-04
|
refinedweb
| 524
| 67.96
|
I am seeing Outbound traffic from a DMZ host to a TOR exit node IP. It’s a TCP connection attempt but denied by the Firewall. But I do not understand what to comprehend from this traffic or where I should direct my investigation. Any advice is appreciated.
Tag: Tor
privacy – Is Tor Private/Secure?
Your privacy
By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy.
Is there a C/C++ Tor API?
I am looking for a (preferably Open Source) API usable in C/C++, that enables one to send/receive data over the Tor network. So I want to build an application that uses sockets to communicate data over a network, but I want that data to be sent over the Tor network instead. Is that possible? Is there a API for it?
firewalls – How to route all traffic through Tor and VPN?
So I am having some problems routing traffic through TOR & VPN.
My first issue:After installing anonsurf and routing all traffic through Tor in Kali everything works fine. However when I have my firewall enabled (gufw) anonsurf will not work when I try to surf the web. It cuts all traffic off to the internet. Does anyone know what the issue is here and why I can’t have anonsurf running as well as my firewall at the sametime?
My second issue is if I start anonsurf routing all traffic through tor after boot and then try to log in to my VPN (nord) I cannot connect to my VPN. The VPN only works if I do not have anonsurf enabled. If I start the VPN first and try to start anonsurf afterwards I get an error on anonsurf saying couldn’t modify resolv.conf operation not permitted. Is it possible to run anonsurf and a VPN at the same time and if so how would I properly do that?
Can Tor browser protect me from being found out?
Stack Exchange Network
Stack Exchange network consists of 176
60 times
I am using my computer, but am connected to my work’s WiFi. Can my company see what websites I visit when am using Tor browser?
luke is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
4
NSA VPN Tor security what to do?
I know the NSA can see you if you are using TOR and if your exit node is in the 14 eyes, however if they break your Tor connection will your IP be shown as your VPN or can they crack that to? Also using a VPN that is based Outside the 14 eyes do you still have to connect to a server outside the 14 eyes or since the company is outside of eyes countries can you connect to a US server?……eyes
tor – Problem install OnionSwitch in kali?
I am having an issue installing OnionSwitch for Tor on Kali. I downloaded the zip from Git and and tried following the instructions listed at the Url.
When I tried installing it using a package manager it said that it could not install because it is not a debian package. So I tried to follow the instructions to build it and I just got the following error
sudo pyinstaller --windowed --icon=Icon/onionswitch_icon.ico --clean --name OnionSwitch.app OnionSwitch_GUI.py (sudo) password for adam: 41 INFO: PyInstaller: 3.5+498e6ee058 41 INFO: Python: 3.9.2 45 INFO: Platform: Linux-5.10.0-kali3-amd64-x86_64-with-glibc2.31 46 INFO: wrote /home/adam/Downloads/Debian_OnionSwitch_V1.3/OnionSwitch_V1.3/OnionSwitch.app.spec 54 INFO: UPX is available. 55 INFO: Removing temporary files and cleaning cache in /root/.cache/pyinstaller Traceback (most recent call last): File "/usr/bin/pyinstaller", line 11, in <module> load_entry_point('PyInstaller==3.5+498e6ee058', 'console_scripts', 'pyinstaller')() File "/usr/lib/python3/dist-packages/PyInstaller/__main__.py", line 111, in run run_build(pyi_config, spec_file, **vars(args)) File "/usr/lib/python3/dist-packages/PyInstaller/__main__.py", line 63, in run_build PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs) File "/usr/lib/python3/dist-packages/PyInstaller/building/build_main.py", line 844, in main build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build')) File "/usr/lib/python3/dist-packages/PyInstaller/building/build_main.py", line 791, in build exec(code, spec_namespace) File "/home/adam/Downloads/Debian_OnionSwitch_V1.3/OnionSwitch_V1.3/OnionSwitch.app.spec", line 6, in <module> a = Analysis(('OnionSwitch_GUI.py'), File "/usr/lib/python3/dist-packages/PyInstaller/building/build_main.py", line 190, in __init__ raise ValueError("script '%s' not found" % script) ValueError: script '/home/adam/Downloads/Debian_OnionSwitch_V1.3/OnionSwitch_V1.3/OnionSwitch_GUI.py' not found
Tor – Which torrc file do I need to edit to exclude the 14 eyes?
I have a question regarding the torrc files on my pc. I have a number of them and would like to know which one I need to edit to exclude certain countries. For instance if I would like to exclude the 14 eyes I would add
ExcludeNodes {},{}
Here is a list of all my torrc files on my pc
/etc/tor/.torrc.swo
/etc/tor/torrc
/etc/tor/torrc.orig
/home/adam/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc
/home/adam/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc-defaults
/home/adam/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc.orig.1
/home/adam/Downloads/tor-browser-linux64-10.0.10_en-US/tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc
/home/adam/Downloads/tor-browser-linux64-10.0.10_en-US/tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc-defaults
/home/adam/kali-anonsurf/kali-anonsurf-deb-src/etc/tor/torrc.anon
/usr/share/doc/tor/torrc.sample.gz
/usr/share/man/man5/torrc.5.gz
/usr/share/tor/tor-service-defaults-torrc
/usr/share/tor/tor-service-defaults-torrc-instances
Thank you
tor – How can I use Bitcoin Core with the anonymous network protocol I2P?
Jon Atack answered this on Twitter. First install and start I2P (version 2.35 or above).
$ apt install i2pd $ systemctl enable i2pd.service $ systemctl start i2pd.service
In your
bitcoin.conf file add (I will assume you want to use both Tor and I2P but remove
debug=tor and
onlynet=onion if not interested in Tor):
debug=tor debug=i2p onlynet=onion onlynet=i2p i2psam=127.0.0.1:7656
There are 2 config options for I2P. Only the second one is required to be set for I2P to work.
-i2pacceptincoming If set and -i2psam is also set then incoming I2P connections are accepted via the SAM proxy. If this is not set but -i2psam is set then only outgoing connections will be made to the I2P network. Ignored if -i2psam is not set. Listening for incoming I2P connections is done through the SAM proxy, not by binding to a local address and port (default: 1) -i2psam=<ip:port> I2P SAM proxy to reach I2P peers and accept I2P connections (default: none)
Then to view your I2P peers:
watch -t ./src/bitcoin-cli -netinfo 4
This was enabled by Vasil Dimov’s Bitcoin Core PR 20685 merged in March 2021 and BIP 155 for addrv2 messages.
Installing tor 10.0.12 on Ubuntu 20.04
Anyone can lend a hand to a new user on what my problem is? I’ve put in “ euro apt install torbrowser-launcher “
It also says “package torbrowser launcher is not available but is referee to by another package “ and at the bottom “ E: package “torbrowser-launcher” has no candidate I tried multiple ways from different articles and have had no success ! Would appreciate it and thank you! If you guys have a working link on a how to that would be better !
|
https://proxies-free.com/tag/tor/
|
CC-MAIN-2021-21
|
refinedweb
| 1,306
| 58.18
|
Find the most significant image in an article.
Project description
A python clone of ImageResolver for finding significant images in HTML content See the excellent JS version at:
USAGE
import imageresolver import sys try: i = imageresolver.ImageResolver() i.register(imageresolver.FileExtensionResolver()) i.register(imageresolver.WebpageResolver(load_images=True, parser='lxml',blacklist='easylist.txt')) url = sys.argv[1] print(i.resolve(url)) except ImageResolverError: print("An error occured")
Differences From the Javascript Version
- methods return instead of calling callbacks
- WebpageResolver has lots of new options (see below)
- Added some debugging features
- Exceptions are raised rather than callback to an error function
WebpageResolver Additions
- rules syntax is now based on AdBlockPlus filters ()
- New rules can be added without writing a resolver
- blacklist image sources and whitelist
- Loads as little of the image as possible when fetching for image info. Stops downloading if diminsions are found or a setable limit is reached.
- The original rules from the JS version are still implemented. (see options)
ImageResolver() METHODS
__init__ (**kwargs)
Keyword options
- max_read_size - set to the maximum amount of bytes to read to find the width and height of an image. Default 10240
- chunk_size - set to the chunk size to read Default 1024
- read_all - set to read the entire image and then detect its info. Option will override max_read_size. Default False
- debug - set to enable debugging output (logger=”ImageResolver”). Default False
fetch (string url)
Fetches a URL and returns the response data.
fetch_image_info (string url)
Fetches an image url and examines the resulting image. Returns a tuple consisting of the detected file extension, the width and the height of the image.
register (instance filter)
Register a filter to examine an image with. The filter argument must be an instance of a class that has a resolve() method. resolve() must accept a string URL and must return a url or None
resolve (string url)
Loop through each registered filter until a url is resolved by one of them. If no url is found, returns None
FileExtensionResolver() METHODS
resolve (string url)
Returns the url if the extention matches a possible image
WebpageResolver() METHODS
The work-horse of this module. Our uses revolve mostly around this filter and thus it is the most feature complete and tested.
__init__ (**kwargs)
Initialize the class with options:
- load_image - set to true to load the first 1k of images whose size is not set in HTML. Default False
- use_js_ruleset - set to true to use the original rules from the Javascript version. Default False
- use_adblock_filters - set to false to disable adblock filters. Default True
- parser - set to a BeautifulSoup compatable parser (lxml is recommended). Default html.parser
- blacklist - set to a file containing AdBlockPlus style filters used to lower an image’s score. Default blacklist.txt
- whiltelist - set to a file containing AdBlockPlus style filters used to raise an image’s score. Default whitelist.txt
- significant_surface - Amount of surface (width x height) of the image required to add additional scoring
- boost_jpeg - add (int) boost score to JPEG files. Default 1
- boost_gif - add (int) boost score to GIF files. Default 0
- boost_png - add (int) boost score to PNG files. Default 0
- skip_fetch_errors - Skip exceptions raised by fetch_image_info(). Exceptions are logged and the image will be skipped. Default True
The default parser for BeautifulSoup is html.parser which is built-in to python. We highly recommend you install lxml and pass parser=”lxml” to WebpageResolver(). In our testing we found that it was much faster and more accurate.
LOGGING
Use the name “ImageResolver” to configure a logger. Skipped exceptions will be logged to this logger’s error output and when enabled, debugging output as well.
EXCEPTIONS
ImageResolverError
Base exception for other exceptions below.
ImageInfoException
Raised if the image could not be read or type, width or height properties return undefined. By default this exception is skipped and logged but can be enabled with “skip_fetch_errors=False” option in WebpageResolver
HTTPException
Raised if the image could not be loaded from the URL. By default this exception is skipped and logged but can be enabled with “skip_fetch_errors=False” option in WebpageResolver
TODO
Need to implement better caching. Future plan is to include a configurable cache method so images seen across sessions can be cached for better performance
BUGS
Probably. Send us an email or a patch if you find one
Original idea and basic setup came from Maurice Svay
Image detection came from the bfg-pages project
Reading AdBlock Plus filters forked from
LICENSE
Some of the source libraries are licensed with the BSD license. To avoid license messiness we’ve chosen to release this software as BSD as well. The easylist.txt provided by AdBlockPlus is licensed as GPL and it should be updated regularly anyway. For these reasons we have chosen not to include the file in the package. You can pass it as the “blacklist” or “whitelist” parameter to the Webpageresolver
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/ImageResolver/
|
CC-MAIN-2021-39
|
refinedweb
| 834
| 53.92
|
ICANN Officially Approves .jobs and .travel TLD's 263
EyeMyke writes "As reported on News.com, ICANN has approved the .jobs and .travel domains, and is pending decision on .asia, .mail, .tel, and .xxx. One has to ask 'Will these new domains actually prove useful, or is ICANN just avoiding the real issues confronting them in regards to regulating domain registration?'" We've covered both of these domains before, but it would seem they are even more-approved now, or at least the process is important enough to warrant an official announcement from ICANN.
I can't believe they added .jobs (Score:5, Funny)
Re:I can't believe they added .jobs (Score:5, Funny)
Re:I can't believe they added .jobs (Score:2)
Re:I can't believe they added .jobs (Score:3, Funny)
Re:I can't believe they added .jobs (Score:2)
Are these really useful? (Score:3, Insightful)
Re:Are these really useful? (Score:4, Informative)
John.
Re:Are these really useful? (Score:3, Interesting)
Montreal's transit system is [stm.info]. It's very useful and informative.
The English writing at the above link (which is a history of the Metro system - i.e., subway) is excellent.
Re:Are these really useful? (Score:4, Funny)
etc.
And for ICANN,
Yes... (Score:2, Insightful)
Oh damn, I've depressed myself.
Re:Are these really useful? (Score:2, Funny)
Re:Are these really useful? (Score:2)
Hey if it screws any of those guys who buy up domain names to sell with no intention of ever using them I say huzzah. Maybe the really good names should be assigned like chairs in a band.
"Todays feature bouts are Western Digital challenging Maxtor for harddrive.com...IBM vs Tinkerbell for pixiedust.com...and Rush Limbaugh vs Jerry Falwell for bigfatliar.org...todays special grudge match pits Bill Gates vs Steve Jobs for parc.rip"
May the best site win.
billy - wondering if blo.job is still available
Oh, I see 'em all right (Score:3, Insightful)
Actually, I see it most often on business cards from extremely small businesses. The kind where FooCorp has taken the
Like another p
Re:Are these really useful? (Score:2, Insightful)
Re:Are these really useful? (Score:3, Insightful)
This is exactly backwards. If you own vettemph.com, you already own, info.vettemph.com, doctor.vettemph.com, xxx
Re:Are these really useful? (Score:3, Informative)
Re:Are these really useful? (Score:3, Informative)
Re:Are these really useful? (Score:2, Interesting)
Names (Score:5, Funny)
I'm thinking... (Score:5, Funny)
hand.jobs
head.jobs
Wait, let me get my credit card number.
Re:I'm thinking... (Score:2, Funny)
Re:Names (Score:2)
As far as I'm aware, single character domain names aren't allowed any more. Sorry.
none under three? (Score:2)
this was his slick way of reminding me I was jewish, which only bothered him and not me. I knew he was lying because I already tried to register a **.ws domain and I was told it was too short.
Re:none under three? (Score:2)
Re:Names (Score:2)
Re:Names (Score:2)
hot.grits.xxx
ob domain (Score:2)
C'mon, what about boob.jobs? This is Slashdot after all . .
.
Re:Names (Score:3, Interesting)
Which brings up an interesting question: can ICANN just usurp New.net's market in
.xxx domains without compensating New.net for the destruction of their business? I believe this already happened with the .biz extension [biztld.net], which was originally offered by Atlantic Root Network.
- Ryan
Not enough (Score:5, Insightful)
Re:Not enough (Score:3, Insightful)
Re:Not enough (Score:2)
I don't really see this as a problem. There is a lot of legislation in place to keep people away from trademark issues, so I say let people register whatever name they want, just don't be evil.
Re:Not enough (Score:2)
This is true within the 8 hierarchies historically known as Usenet (comp, humanities, misc, news, rec, soc, sci, talk) and within many other hierarchies but there is certainly no authoritative list of top-level names.
Right, but there's a hierarchy of authoritative name information.
Re:Not enough (Score:2, Funny)
Re:Not enough (Score:5, Informative)
you'll find there are MANY alternate tld schemes, including one which was using
You can freely use any of these servers instead of your ISP's nameservers if you want to be able to see ALL the domain names on the net, not just the ones in the US Government controlled nameservers. You will, of course be able to see all the "old" domain names like
for one example.
Re:Not enough (Score:3, Insightful)
None at all. That's the way it currently works. Only ICANN has the capability
:-)
Re:Not enough (Score:3, Insightful)
Monster.jobs? (Score:3, Insightful)
TLDs should be optional (Score:5, Insightful)
Citing monster.com as an example again: it is a jobs site, so it should get a
I shouldn't care whether the site I want is a network, a company, or a non-profit organisation; usually I just want to get to the site.
Re:TLDs should be optional (Score:4, Insightful)
No, it shows that the TLD idea is from a time when net users were expected to have minimum tech knowledge. Once AOL entered the picture, the idea became too complicated for the average luser and broke down.
Re:TLDs should be optional (Score:2)
Re:Monster.jobs? (Score:4, Funny)
I can't imagine a big push by webmasters to move their visitors off their primary websites onto another domain. They'd suddenly have a pack of extra issues to deal with, like realizing that their current authentication cookies that are set for
.example.com won't carry across to their new site.
I can almost see the utility in adding a small set of additional TLDs (as opposed to opening the TLDs and being done with it), but the ones they pick are invariably stupid special-interest projects that maybe 15 people will use. I could understand one for, say,
.rest for the millions of restaurants out there. Why they labor and strive to bring us .giantballofstring is beyond me.
Re: (Score:4, Insightful)
It only seems to do that on domains that are entered by hand. It didn't do that when clicking on the URL that you posted.
The really sneaky thing is that whenever a
.jobs server goes down or there's a typo, Monster will get the traffic instead, and will no warning (in Firefox) to the user.
.xxx is potentially bad news. (Score:5, Interesting)
If the point of
Blarg.
Re:.xxx is potentially bad news. (Score:3, Insightful)
Re:.xxx is potentially bad news. (Score:2)
now..
Re:.xxx is potentially bad news. (Score:3, Insightful)
Let's say hustler.com simply becamed a redirect to huster.xxx. Any school/library filter would instantly know it's a no-no site, without having some schlub at the filter company check to see if its about tricking people on the basketball court.
For various reasons, FORCING sites onto the xxx tld would be a bad thing - you'd have to put someone in charge of deciding what's pornographic, or adult, and that
Re:.xxx is potentially bad news. (Score:3, Informative)
I really believe someone is going to eventually introduce legislation forcing adult content onto
Re:.xxx is potentially bad news. (Score:3, Funny)
Exactly. And one man's adult content is another man's vacation photos from a topless beach.
Re:.xxx is potentially bad news. (Score:2)
Easy blacklisting, for school and library filters. Also, for scrupulous porn sites, it's good marketing.
[...]
For the seedy sites that try to fool you into visiting them, or that pretend to be one thing when they're really another, this doesn't do much.
Exactly.
Re:.xxx is potentially bad news. (Score:2)
Re:.xxx is potentially bad news. (Score:2)
John.
Re:.xxx is potentially bad news. (Score:5, Interesting)
Maybe the first step is to just create the xxx domain.
What I'd do (if I was ICANN or whomever):
* Create the xxx tld and then take a select bunch of dictionary words and sell them to pr0n friendly bandwidth/hosting providers.
* Make them the official registratrs for "free.xxx", "anal.xxx" or whatever.
* This creates more imaginative and available pr0n namespace (pr0n is always going to be number one)
* Allow ISP's or OS vendors to block xxx domain
* Stop accepting pr0n sites for
** Don't get into censorship, unspoken agreement.
* Allow pr0n marketing on other tld's...
** Direct community of pr0n content producers to use the xxx tld when the content is obscene
** Example: playboy.tv is a site about the channel, but it doesn't have nude content. When you want nude you click into xxx tld, unless your parents blocked it.
** Example (2): Playboy.com is a portal to all xxx tlds but contains magazine content; all images are hosted on a xxx tld.
* Only send form letters to
I think we can solve the "censorship" problem a lot easier that way. Make it transparent to content producers who want to go wild and they will adopt it fast, and that is all it takes. Build the porn, they will come (no pun intended).
Some say this approach will cause it to become USENET 2010, but I say that depends on content producers. If they want to create pointy.boobies.bobs.image.xxx let them - but they will learn right away that won't work.
Re:.xxx is potentially bad news. (Score:5, Insightful)
whitehouse.org [whitehouse.org]
nice-tits.org [nice-tits.org]
Both not really the sites you might expect
Re:.xxx is potentially bad news. (Score:2)
Even with Firefox's LinkPreview [mozilla.org] you can't tell that whitehouse.org is not what you might think it should be.
Dot triple X could be dangerous (Score:2, Interesting)
If the dot triple X domain is approved, the following worst case scenario is possible.
I don't believe it will actually happen because so far, the people have always held freedom of speech up to be very important.
Note, this is a worst case scenario. I don't believe it's all that likely to happen.
Popular websites, such as playboy.com
It's not a dupe! (Score:5, Funny)
Whoah, back off people he's just applied a +4 magical dupe shield. Now we can't use our 4th level 'Cry of dupe' scroll.
.COM is way too intuitive (Score:2, Insightful)
First because people can spell whitehouse.com better than whitehouse.xxx, due to the O
Second,
When
Re:.COM is way too intuitive (Score:2)
Can we have a .dot, too while you're at it? (Score:5, Funny) sounds great. Like morse code or something.
Re:Can we have a .dot, too while you're at it? (Score:3, Informative)
Re:Can we have a .dot, too while you're at it? (Score:3, Funny)
Fabulous! Typing that URL is a superb introduction to Slashdot's essentially redundant nature. Soon you'll be posting stories that rephrase actual news sites, enjoying CmdrTaco repost them, reaffirming the evils of draconian copyright law in any mildly appropriate article and reusing the Soviet Russia meme. It all starts with the slash, followed by a slash, followed by yet another slash.
I recall "Atlanta's best polka band"... (Score:2)
But of course that's not really their website.
No no no (Score:5, Funny)
Plus it'd be really easy for goverments to censor them all in one fell swoop!
Re:No no no (Score:2)
the purpose of these TLDs is obvious (Score:2)
But I'm probably just seeing the glass as half full aren't I?
At this point... (Score:4, Insightful)
- By language (.it,
- By kind (and assumed language was english (.org,
The first category is ok and works well. But then we come to the second. Having these 4 original category:
So why can't people just use the 2nd level domain to describe who they are? The TLD is already composed of enough entries to distinguish the category.
Slashdot should be
Re:At this point... (Score:3, Informative)
What language do they speak in
Originally, non-international websites were meant to use their own countries (correct me if I'm wrong, but isn't there a
Its not even consistent between countries. In Australia, th
Re:At this point... (Score:3, Funny)
Re:At this point... (Score:2)
I know you wanted to say country and not language.
Let each country have its way. Put all the com, net, org, biz, sex, whatever in
Are you a large company that operates in a lot of countries? Register in each and every country. I am sure there will be companies who will help you do this.
Are you a small company working in only one country? register just there. I can even imagine that prices are lower if you regi
,net is (was) for Internet infrastructure... (Score:2)
I think these have always been arbitrary, since anyone can get a
While
Re:At this point... (Score:2, Interesting)
Re:At this point... (Score:2)
We have
.gov, instead of .gov.us, because the US Government is effectively governing the whole world (save for some few stubborn "renegate countries" they would like to govern as well). Other countries would use .gov.ccTLD to designate their *local* governments. Oh, and the french would not accept the english "gov" and would opt for .gouv.fr ;-)
As for
.mil: it's a wonder we don't have the TLDs .usaf, .army and .navy as well, considering how much they are competing against each other!
Re:At this point... (Score:2)
Unless you are meaning '.gov' or '.mil' without the country code? If so, ignore me please
These TLD are meaningless (Score:2)
Let the UN take ICANN over and start selling global trademarks. So IBM becomes just
With that local trademarks set right under...
Re:These TLD are meaningless (Score:2)
Let the UN take ICANN over and start selling global trademarks.
The UN as a merchant? Oh no! Everything else would be far better than this!
If the UN starts selling things, shouldn't they be using the un.com domain instead of the
.int tld?
.SEX, .XXX, .KIDS Restrict Freedom of Speech (Score:4, Insightful)
--------
Date: Fri, 13 Oct 2000 23:52:50 -0400
Sender: Owner-Domain-Policy
From: Ron Bennett
Subject: [ICANN COMMENT]
To: DOMAIN-POLICY@LISTS.NETSOL.COM
First the problems with the proposed
The proposed TLDs
Ron Bennett
bennett@wyomissing.com
--------
More posts and comments by others involved in domain name policy regarding these issues:
.xxx name clashes? (Score:2, Insightful)
Well... (Score:2, Funny)
.travelcom? .jobscom.com? .xxx.cum.com? (Score:3, Insightful)
(Personally, I almost never care where a domain is hosted, because I use Google to search for my "bookmarks" and rarely type-in domain URLs manually. The new TLDs are just a limited opportunity for registrars to print new money, IMO.)
.mail (Score:2)
Re:.mail (Score:2)
The you probably also would like a
Re:.mail (Score:2)
.net was originally meant to be used by ISPs and network infrastructure providers.
.isp would be just redundant. Oh, .net became meaningless and open the the general public? Too bad...
Registration. (Score:2)
Oh yeah... this will fly. (Score:2)
----
Hey! Check out this cool website! [tinyurl.com]
----
problem (Score:2)
The proposed act to force adult sites to use
This is completely rediculous. (Score:2, Insightful)
My question is... (Score:2)
Re:My question is... (Score:2)
.aslongasthednsspecificationallowswhichisalot
Marketing: Vapors Into Cash (Score:2)
Or, is ICANN just printing up more scrip for the company store? Minting new tokens, marketing them to collectors? Inventing a new fantasy baseball team in order to sell baseball cards?
Yesterday we could choose from five types of breakfast cereal, all more or less same other than packaging. Today we pick from fifty different kinds of br
Who'll be first to register... (Score:2, Funny)
hand.jobs
time.travel
youth-in.asia
hot.mail
hot.xxx
in.tel
ordering-wives-from.asia
etc, etc...
Shortest or weirdest TLDs? (Score:2)
The shortest TLDs are 2-letter ccTLDs. Why don't we consider 1 letter TLDs as well:
.a, .b, etc...?
Incidentally, the german telecom, which changes names like a chameleon every now and then, called itself something like t... a few years ago (or they used this as a logo?), and they seriously asked if they could register 't...' as a domain name. Oh well, they've changed names again since then.
Is DNS outdated? (Score:2)
Forgive my ignorance but isn't the DNS hierarchy a throwback to the days when bandwidth was low and most data was kept in flat files?
With modern database technology and highspeed connectivity how difficult would it be to map an IP address to any name with or without any number of dots distributed over a couple of hundred strategically placed name servers? Like a global telephone book.
Re:Is DNS outdated? (Score:2)
The telephone system in most countries already works that way: it maps an arbitrary telephone number to a real physical slot number. That's the reason people can keep their telephone numbers even when moving to a new location.
Re:Is DNS outdated? (Score:2)
That's how DNS works, except that we've all agreed to use the same list of 13 root nameservers controlled by ICANN that publish the same data. It's almost trivially easy for a DNS admin to add or remove servers from that list, but there's a bootstrapping
Why bother with other that com/net/org's? (Score:2)
Is there any real reason for these? It seems the demans remains for
Re:Having made a huge mess already... (Score:2)
Re:Having made a huge mess already... (Score:2)
"Is that what the Internet is about? Jobs, leisure and pr0n?"
Of course not. You forgot free stuff.
billy - yes...lets NEVER forget free stuff
Re:.xxx domain (Score:3, Insightful)
Sure, filtering out hardcore pornography would be made a lot easier if all sites were force
Re:Squatting? (Score:2)
5 years from now we wont even know what to type in (Score:3, Insightful)
|
https://slashdot.org/story/05/04/09/1310251/icann-officially-approves-jobs-and-travel-tlds
|
CC-MAIN-2018-22
|
refinedweb
| 3,096
| 68.87
|
Ticket #1879 (closed enhancement: wontfix)
Identity should use a sql query to get the user's permissions
Description
Currently Identity iterates through all of a user's groups and individually pulling all the permissions for each group and then adding them to a set before returning them.
It should instead use an appropriate query that will return the unique permissions from all the user's groups.
Christoph Zwerschke suggested two SA methods for doing this:
def permissions(self):
return set(Permission.query.distinct().join(['groups',
'users']).filter_by(user_id=self.user_id))
def permissions(self):
return set(Permission.query.filter(Permission.groups.any(
Group.users.any(User.user_id==self.user_id))).all())
The disadvantage of the first is that the join will gather all the duplicate permissions and then strip them out with the distinct clause. This is probably slower than the second method that uses an exist clause.
The problem with the second is that it is not compatible with SA 0.3.x.
Attachments
Change History
comment:2 Changed 11 years ago by faide
more info in this thread:
comment:3 Changed 11 years ago by Janzert
Before working on trying to get a SA 0.3 compatible statement I decided to check the speed difference of the different options. To my surprise the current method of iterating through the groups in python was much faster.
python -m timeit -s"import perm_test;perm_test.iter()" "perm_test.iter()" 10000 loops, best of 3: 94.9 usec per loop
python -m timeit -s"import perm_test;perm_test.join()" "perm_test.join()" 100 loops, best of 3: 13.1 msec per loop
python -m timeit -s"import perm_test;perm_test.exist()" "perm_test.exist()" 100 loops, best of 3: 14.7 msec per loop
These were done using SA 0.5 and postgresql 8.3.1. I also tested with sqlite and the results were very similiar.
In order to test I initialized the database with one user having 20 groups each group having 11 out of 60 permissions. I will attach the project with the scripts I used to test. To initialize the database first do "tg-admin sql create" then "python init.py". Then the timings can be tested using the commandlines above. You can also vary the number of groups, permissions and permissions per group in the init.py script.
Unless someone else shows why my timing method is wrong or comes out different for them I'll go ahead and close this in a few days.
Changed 11 years ago by Janzert
- attachment dbtest.zip
added
quickstarted project with scripts to test query speeds.
comment:4 Changed 11 years ago by chrisz
I can reproduce the numbers with your test, but as I wrote in the thread above, the test is not realistic because it queries the permissions in a loop in the same database session, where the group.permissions are all kept in memory. In reality, the requests run in different sessions and need to requery the database. If you simulate this using turbogears.database.session.expire_all(), then the results reverse - the old method is then 2-3 times slower. As suggested in the thread above, we may consider some caching to cover the case that permissions are queried multiple times in the same database session.
I can also reproduce that the second method using exists is somewhat slower than the method using join (no matter whether I use SQLite or Postgres 8.3, or whether I add additional indices and ANALYZE the tables), contrary to what we expected. Maybe this is because the database cannot optimize the chained exists clause for using indices.
comment:5 Changed 11 years ago by chrisz
This is the SQL query used in the first method:
SELECT DISTINCT permission_name FROM permission JOIN group_permission USING (permission_id) JOIN user_group USING (group_id) WHERE user_id = $1
And this is the SQL query used in the second method:
SELECT permission_name FROM permission WHERE EXISTS (SELECT 1 FROM group_permission WHERE permission_id = permission.permission_id AND EXISTS (SELECT 1 FROM user_group WHERE group_id = group_permission.group_id AND user_id = $1))
(The actual queries issued by SQLAlchemy are a bit more complicated because tg_user and tg_group are joined which is actually not necessary).
I tested these with PostgreSQL 8.3 and even larger numbers of groups and permissions, and it turned out that the second query is indeed much slower (of course I did an analyze on the database so that it can do an index scan).
I then did another test with the following third SQL query:
SELECT permission_name FROM permission WHERE permission_id IN (SELECT permission_id FROM group_permission WHERE group_id IN (SELECT group_id FROM user_group WHERE user_id = $1))
It turned out that this query is even faster than the others, contrary to all we learned in school about how bad "in" is compared with "exists".
Does anybody have a good explanation for this and an idea how to elegantly express the last query with SQLAlchemy?
comment:6 Changed 11 years ago by Janzert
It's certainly not anything close to elegant, but at least to get started here is a method that will use the "in" query to get the permissions.
@property def in_perms(self): from sqlalchemy import sql g_ids = sql.select([user_group_table.c.group_id], user_group_table.c.user_id == self.user_id) p_ids = sql.select([group_permission_table.c.permission_id], group_permission_table.c.group_id.in_(g_ids)) return set(Permission.query.filter( Permission.permission_id.in_(p_ids)))
Also only tested with 0.5 and guessing that it needs >=0.4.
It does slightly beat out the join query (~14ms to ~15.5ms).
Also as a side note, explicitly caching the permissions results in it being about 10,000 times faster for subsequent checks. I'd be a bit worried about ending up with stale results if someone hangs onto a User object long term though.
comment:7 Changed 11 years ago by chrisz
That doesn't look so bad. I find it a bit more readable if you rewrite it as below (and this also works with SA 0.3.10):
from sqlalchemy.sql import select ... @property def permissions(self): perm = permissions_table.c group_perm = group_permission_table.c user_group = user_group_table.c return set(Permission.query.filter(perm.permission_id.in_( select([group_perm.permission_id], group_perm.group_id.in_( select([user_group.group_id], user_group.user_id == self.user_id))))))
The database sessions usually last only one request, so I wouldn't worry about stale results. It may be even an advantage if permissions cannot change during one request.
comment:8 Changed 11 years ago by Janzert
Yes, your formulation is certainly better.
I also would like to change to returning a frozenset to emphasize that this is a read-only view of the permissions.
Changing that and adding caching it ends up as:
@property def permissions(self): """ Get the User's permissions. This just returns a static view of the permissions and cannot be used to change them. """ try: return self._permissions except AttributeError: perm = permissions_table.c group_perm = group_permission_table.c user_group = user_group_table.c p = frozenset(Permission.query.filter(perm.permission_id.in_( select([group_perm.permission_id], group_perm.group_id.in_( select([user_group.group_id], user_group.user_id == self.user_id)))))) self._permissions = p return p
If that looks good to everyone I'll work on getting a patch together for the quickstart templates. I'll be out of town for the latter half of the week though so it'll probably be next week before I have anything.
comment:9 Changed 11 years ago by chrisz
Returning a frozenset makes sense (the identity methods already do that).
We'll probably not want to patch this in TG 1.0 (only critical bugfixes there), so you should base your patch on TG 1.1/1.5.
Btw, we should also make a similar change to the SQLObject model.
comment:10 Changed 11 years ago by Janzert
Yep, I'm working with a checkout of the 1.1 branch.
I can't find any way to get access to the intermediate tables in order to use sqlobject's sqlbuilder. Raw sql can be used, which gives the following:
_get_permissions(self): try: return self._permissions except AttributeError: p = frozenset(Permission.select("""permission.id IN ( SELECT group_permission.permission_id FROM group_permission WHERE group_permission.group_id IN ( SELECT user_group.group_id FROM user_group, tg_user WHERE user_id = tg_user.id))""")) self._permissions = p return p
This seems like fairly straight up sql but I wonder if it will run into database compatibility problems?
I also still need to take a look at an Elixir version, but hopefully that can be based off the sqlalchemy one.
comment:11 Changed 11 years ago by Janzert
The following works for Elixir but is quite ugly to get a hold of the intermediary tables.
@property def permissions(self): perm = Permission.c user_group = self._descriptor.properties['groups'].secondary.c group_perm = Permission._descriptor.properties['groups'].secondary.c p = frozenset(Permission.query.filter(perm.permission_id.in_( select([group_perm.permission_permission_id], group_perm.tg_group_group_id.in_( select([user_group.tg_group_group_id], user_group.tg_user_user_id == self.user_id)))))) self._permissions = p return p
I'm thinking at this point that unless someone can find a cleaner way to do this for sqlobject and elixir, maybe this should only be changed in sqlalchemy.
comment:12 Changed 11 years ago by faide
- Milestone changed from 1.5 to 1.1
comment:13 Changed 11 years ago by faide
- Milestone changed from 1.1 to 1.1 maintenance
Is that worth the effort? Did someone bench the real implementation to get an idea?
comment:14 Changed 10 years ago by Chris Arndt
I'd say if nobody comes up with an implementation soon, then close this ticket. It's working as it is after all. And if somebody really needs more speed, it's always possible to write a custom identity provider.
comment:15 Changed 10 years ago by Janzert
I basically gave up on this because there doesn't appear to be a clean way to make the queries in Elixir or SQLObject. I could make a patch for just SA if wanted, otherwise this should just be closed.
comment:16 Changed 10 years ago by Janzert
- Status changed from new to closed
- Resolution set to wontfix
Marking wontfix as no one has enough interest to make the needed changes and it's likely a very small effect in most applications anyway.
|
http://trac.turbogears.org/ticket/1879
|
CC-MAIN-2019-22
|
refinedweb
| 1,694
| 57.16
|
I am trying to find a way to break the split the lines of text in a scanned document that has been adaptive thresholded. Right now, I am storing the pixel values of the document as unsigned ints from 0 to 255, and I am taking the average of the pixels in each line, and I split the lines into ranges based on whether the average of the pixels values is larger than 250, and then I take the median of each range of lines for which this holds. However, this methods sometimes fails, as there can be black splotches on the image.
Is there a more noise-resistant way to do this task?
EDIT: Here is some code. "warped" is the name of the original image, "cuts" is where I want to split the image.
warped = threshold_adaptive(warped, 250, offset = 10)
warped = warped.astype("uint8") * 255
# get areas where we can split image on whitespace to make OCR more accurate
color_level = np.array([np.sum(line) / len(line) for line in warped])
cuts = []
i = 0
while(i < len(color_level)):
if color_level[i] > 250:
begin = i
while(color_level[i] > 250):
i += 1
cuts.append((i + begin)/2) # middle of the whitespace region
else:
i += 1
From your input image, you need to make text as white, and background as black
You need then to compute the rotation angle of your bill. A simple approach is to find the
minAreaRect of all white points (
findNonZero), and you get:
Then you can rotate your bill, so that text is horizontal:
Now you can compute horizontal projection (
reduce). You can take the average value in each line. Apply a threshold
th on the histogram to account for some noise in the image (here I used
0, i.e. no noise). Lines with only background will have a value
>0, text lines will have value
0 in the histogram. Then take the average bin coordinate of each continuous sequence of white bins in the histogram. That will be the
y coordinate of your lines:
Here the code. It's in C++, but since most of the work is with OpenCV functions, it should be easy convertible to Python. At least, you can use this as a reference:
#include <opencv2/opencv.hpp> using namespace cv; using namespace std; int main() { // Read image Mat3b img = imread("path_to_image"); // Binarize image. Text is white, background is black Mat1b bin; cvtColor(img, bin, COLOR_BGR2GRAY); bin = bin < 200; // Find all white pixels vector<Point> pts; findNonZero(bin, pts); // Get rotated rect of white pixels RotatedRect box = minAreaRect(pts); if (box.size.width > box.size.height) { swap(box.size.width, box.size.height); box.angle += 90.f; } Point2f vertices[4]; box.points(vertices); for (int i = 0; i < 4; ++i) { line(img, vertices[i], vertices[(i + 1) % 4], Scalar(0, 255, 0)); } // Rotate the image according to the found angle Mat1b rotated; Mat M = getRotationMatrix2D(box.center, box.angle, 1.0); warpAffine(bin, rotated, M, bin.size()); // Compute horizontal projections Mat1f horProj; reduce(rotated, horProj, 1, CV_REDUCE_AVG); // Remove noise in histogram. White bins identify space lines, black bins identify text lines float th = 0; Mat1b hist = horProj <= th; // Get mean coordinate of white white pixels groups vector<int> ycoords; int y = 0; int count = 0; bool isSpace = false; for (int i = 0; i < rotated.rows; ++i) { if (!isSpace) { if (hist(i)) { isSpace = true; count = 1; y = i; } } else { if (!hist(i)) { isSpace = false; ycoords.push_back(y / count); } else { y += i; count++; } } } // Draw line as final result Mat3b result; cvtColor(rotated, result, COLOR_GRAY2BGR); for (int i = 0; i < ycoords.size(); ++i) { line(result, Point(0, ycoords[i]), Point(result.cols, ycoords[i]), Scalar(0, 255, 0)); } return 0; }
|
https://codedump.io/share/0osxbtlIqCF7/1/split-text-lines-in-scanned-document
|
CC-MAIN-2018-09
|
refinedweb
| 616
| 70.43
|
Does opencv_dnn use gpu?
I tried to run the code in Jetson GPU but I can't?
I just obtained 5 fps, this means that use CPU not GPU
this is my full code
from imutils.video import VideoStream from imutils.video import FPS import numpy as np import argparse import imutils import time import cv2 CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"] COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3)) print("[INFO] loading model...") net = cv2.dnn.readNetFromCaffe('prototxt.txt', 'caffemodel') # deep neural network ( dnn ) print("[INFO] starting video stream...") vs = cv2.VideoCapture(0) vs.release() vs = cv2.VideoCapture(0) time.sleep(2.0) fps = FPS().start() while True: ret, frame = vs.read() # add ret, frame = imutils.resize(frame, width=400) (h, w) = frame.shape[:2] blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5) net.setInput(blob) detections = net.forward() big_area = 0 big_center = 320 detected = 0 # loop over the detections for i in np.arange(0, detections.shape[2]): confidence = detections[0, 0, i, 2] object_type = int(detections[0, 0, i, 1]) if object_type == 15 and confidence > 0.2: # it was if confidence > args["confidence"]: box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") label = "{}: {:.2f}%".format('person', confidence * 100) cv2.rectangle(frame, (startX, startY), (endX, endY), [0, 0, 255], 2) y = startY - 15 if startY - 15 > 15 else startY + 15 cv2.putText(frame, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, [0, 0, 255], 2) rect_area = (endX - startX) * (endY - startY) detected = 1 if rect_area > big_area: # big_area and big_center are used so that the TurtleBot always tracks the closest person big_area = rect_area cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF if key == ord("q"): break fps.update() # stop the timer and display FPS information fps.stop() vs.release() cv2.destroyAllWindows()
any ideas or suggestions?
thank you in advance
I found this usueful:...
thanks, can I do it but it is bad, I saw that the CPU is faster than GPU with MobileNet SSD, right?...
Note that the benchmarks were carried out on a desktop/mobile CPU. They might not be applicable to embedded boards. Jetson can probably perform better with the CUDA backend than the on-board CPU. CUDA support for the DNN module was merged into master a few days ago. You have to use
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)and
net.setPreferableTarget(cv2.dnn. DNN_TARGET_CUDA)or
cv2.dnn.DNN_TARGET_CUDA_FP16if your device supports it.
@Yashas, thank you so much for your help, I was frustrated, but when I read your response, change my feeling, for more details about my system, you can see here, what is wrong???
@redhwan your question was posted before the CUDA support was merged into master. I preassume that your OpenCV build doesn't have the CUDA backend. To use the CUDA backend, you need to build the master branch from OpenCV GitHub repository.
@Yashas, I have one question about building with CUDA based on your gist. The two first points
WITH_CUDAand
WITH_CUDNNare Cmake parameters I guess. Then there is The CUDA backend is enabled by setting the following option:
OPENCV_DNN_CUDAWhere in the build process do we specify that option?
@Erik Langskjegg
OPENCV_DNN_CUDAis also a CMake option just like the other two.
@Yashas, I am confused about some things. So, I have some questions? first one: you mean that master is 4.x releases and branch is 3.4.x releases??? I got it here, right? Second one: do the steps of building a master differ from the branch? I followed this blog post hereon my PC with
"3.3.1 version"and here on JetsonTX2. Third one: can I run the
OpenCV dnnwith CUDA Version: 9 or not? you mentioned cuDNN (min: 7.5.0) here if not, I need to use everything similar your system on my PC.
Sorry, @Yashas, I have other questions, you wrote above "you need to build the master branch from OpenCV GitHub repository", you mean these here and here or not ?? another issue I noticed here in your tables that MobileNet SSD is very bad with GPU, Inference Engine CPU was the best one?? can you give me some tips about how to use it? my PC: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz and NVIDIA GTX 1050 Ti
@redhwan The CUDA backend hasn't been released yet. It's available on the master branch (where stuff for the next release are kept). You need to download (or clone) the OpenCV repository and build it. You can find more information about installing Inference Engine here.
|
https://answers.opencv.org/question/219979/does-opencv_dnn-use-gpu/?sort=votes
|
CC-MAIN-2020-05
|
refinedweb
| 785
| 68.87
|
possible bug in portal 2.2Matthew Purdy Dec 25, 2005 1:10 AM
i know jsps/servlets/ejbs well; and have used tomcat and jboss as for a long time. however, i am new jboss portal, and really new to portlets. i started exploring with portlets this month. i started using jboss portal 2.0, and now i am using jboss portal 2.2. do to my ignorance on the subject matter, i am not sure if i discovered a bug in portal 2.0 that was fix in portal 2.2. or a new bug in portal 2.2.
my env:
portal 2.0, jboss as 4.0.2, mysql 4.1
portal 2.2, jboss as 4.0.3, mysql 4.1
here is was I discovered:
What was done:
The TestRequestResponsePortlet sets two message in processAction. The first one "message", uses request.setAttribute; the second one "message2" uses response.setRenderParameter. In doView, thoses two messages are read. The first one "message" is read with request.getAttribute. The second one is read with request.getParameter.
The results:
When using JBoss Portal 2.0, both messages are pass thru, however when using JBoss 2.2, only the second one "message2" is passed thru. Since these are strings, response.setRenderParmeter / request.getParameter are fine to use. however, if you need to pass an object other than a string, you have to use the (set|get)Attribute.
if i am wrong, what ActionRequest/ActionResponse and RenderRequest/RenderResponse methods do i use to pass object from the action to the render in a portlet?
Here is my test portlet code
--------------------------------------------------------------------------------
package test.web.portlet;
import java.io.*;
import javax.portlet.*;
public class TestRequestResponsePortlet extends GenericPortlet
{
public void processAction(ActionRequest request, ActionResponse response) throws PortletException, IOException
{
request.setAttribute("message", "message from TestRequestResponsePortlet.processAction using request.setAttribute");
response.setRenderParameter("message2", "message2 from TestRequestResponsePortlet.processAction using response.setRenderParameter");
}//end method processAction
public void doView(RenderRequest request, RenderResponse response)
{
String message = (String)request.getAttribute("message");
String message2 = request.getParameter("message2");
PortletURL actionUrl = response.createActionURL();
String url = actionUrl.toString();
response.setContentType("text/html");
try
{
PrintWriter out = response.getWriter();
out.println("message = " + message + "");
out.println("message2 = " + message2 + "");
out.println("click the link below to see the results");
out.println("<h3><a href=\"" + url + "\">testing request response</h3>");
out.close();
}//end try
catch (IOException e)
{
e.printStackTrace();
}//end catch e
}//end method doView
}//end class TestRequestResponsePortletPortlet
--------------------------------------------------------------------------------
1. Re: possible bug in portal 2.2Dan Christopherson Dec 26, 2005 8:26 PM (in response to Matthew Purdy)
You shouldn't count on being able to access attributes set in an actionRequest to be accessable from a renderRequest. The reason for this is that processAaction is only called once for N calls to render, and N-1 of those render calls will be in different HTTPRequests.
I'm not sure if the portlet spec specifically forbids the behavior you saw in JBoss portal 2.0, but it does mandate that you code for the behavior you saw in JBoss portal 2.2. To illustrate this, run your portlet in 2.0, then hit a url in another portlet on the same page - this time around, your render will be called without your processAction being called and you'll get your string out of the render parameter but not from the attribute.
2. Alternatives to the request.setAttribute? portlet session vaMatthew Purdy Dec 26, 2005 10:48 PM (in response to Matthew Purdy)
thanx for your replay... new to portlets... didnt think about that... that does make sense:-)
what is an aternative to the request.setAttribute? should i use the portlet session?
i.e.
1) set the portlet session in the action request.
2) use it in the doView/doEdit/doHelp
3) when done, delete it in the doView/doEdit/doHelp
3. Re: possible bug in portal 2.2Dan Christopherson Dec 27, 2005 10:03 AM (in response to Matthew Purdy)
Yeah, that would work.
One alternative is to 'marshall' it into render parameters, if it's a relatively simple object. I'm working on an annotations based portlet framework that has that feature. Waiting on signature from my bosses to open source it.
Another alternative is to refetch within the render, then use the portal and/or hibernate caching to optimize. That's easy to say, but there's a lot of thought required in analysing the cacheing requirements vs. data 'freshness'
4. Re: possible bug in portal 2.2Matthew Purdy Dec 27, 2005 3:57 PM (in response to Matthew Purdy)
how would you marshall it into a render parameter?
futhermore, i have been thinking and doing research:
i.e.
what if you have a portlet that read something a datasource in the action request.
if the connection to the datasource fails you would want to show it in the render (doView/doEdit/doHelp). you may want to pass along the Exception class, which then would be forwarded to the error.jsp page inside the rendering...
if(request.getAttribute("errorException") != null)
{
//pass the message to error.jsp
}
else
{
//do what you want to with the data from the datasource
}
read this from jsr 168: it seems like using the request.(get|set)Attribute is valid???
------------------------------------------------------------
25 PLT.11.1.3 Request Attributes
Request attributes are objects associated with a portlet during a single portlet request.
Request attributes may be set by the portlet or the portlet container to express information
that otherwise could not be expressed via the API. Request attributes can be used to share
information with a servlet or JSP being included via the PortletRequestDispatcher.
30 Attributes are set, obtained and removed using the following methods of the
PortletRequest interface:
? getAttribute
? getAttributeNames
? setAttribute
35 ? removeAttribute
Only one attribute value may be associated with an attribute name.
Attribute names beginning with the ?javax.portlet.? prefix are reserved for definition
by this specification. It is suggested that all attributes placed into the attribute set be
named in accordance with the reverse domain name convention suggested by the Java
40 Programming Language Specification 1 for package naming.
------------------------------------------------------------------------
5. Re: possible bug in portal 2.2Konstantin Novoselov Dec 27, 2005 5:31 PM (in response to Matthew Purdy)
read this from jsr 168: it seems like using the request.(get|set)Attribute is valid???
It is valid, but nobody gurantees that render methods would have access to the same instance of request as processAction. doGet use new request object each time you refresh the page or do something with other portlets on the page.
You may save your exceptions or other objects in the portlet session to achive desired result.
6. Re: possible bug in portal 2.2Matthew Purdy Dec 27, 2005 5:33 PM (in response to Matthew Purdy)
okay, thanx :-)
7. Re: possible bug in portal 2.2Dan Christopherson Dec 28, 2005 1:22 PM (in response to Matthew Purdy)
.
8. Re: possible bug in portal 2.2Dan Christopherson Dec 29, 2005 1:26 PM (in response to Matthew Purdy)
"danch" wrote:
.
Err, wanted to point out that all this would really do is force the portal container to manage the lifespan of the object in session, rather than the portlet. It'd also be overhead to marshal/demarshal
9. Re: possible bug in portal 2.2Matthew Purdy Dec 29, 2005 4:41 PM (in response to Matthew Purdy)
okay, thanx for the info :-)
|
https://developer.jboss.org/thread/122311
|
CC-MAIN-2018-39
|
refinedweb
| 1,224
| 60.61
|
REQUEST_KEY(2) Linux Key Management Calls REQUEST_KEY(2)
request_key - request a key from the kernel's key management facility
#include <keyutils.h> key_serial_t request_key(const char *type, const char *description, const char *callout_info, key_serial_t dest_keyring); Note: There is no glibc wrapper for this system call; see NOTES.: KEY_SPEC_THREAD_KEYRING This specifies the caller)).: • The requestor keyring (KEY_REQKEY_DEFL_REQUESTOR_KEYRING, since Linux 2.6.29). • The thread-specific keyring (KEY_REQKEY_DEFL_THREAD_KEYRING; see thread-keyring(7)). • The process-specific keyring (KEY_REQKEY_DEFL_PROCESS_KEYRING; see process-keyring(7)). • The session-specific keyring (KEY_REQKEY_DEFL_SESSION_KEYRING; see session-keyring(7)). • The session keyring for the process's user ID (KEY_REQKEY_DEFL_USER_SESSION_KEYRING; see user-session-keyring(7)). This keyring is expected to always exist. • The UID-specific keyring (KEY_REQKEY_DEFL_USER_KEYRING; see user-keyring(7)). This keyring is also expected to always exist.:..
On success, request_key() returns the serial number of the key it found or caused to be created. ('.').
This system call first appeared in Linux 2.6.10. The ability to instantiate keys upon request was added in Linux 2.6.13.
This system call is a nonstandard Linux extension.
Glibc does not provide a wrapper for this system call. A wrapper is provided in the libkeyutils library. (The accompanying package provides the <keyutils.h> header file.) When employing the wrapper in that library, link with -lkeyutils. '2dddaf50' /proc/keys 2dddaf50 I--Q--- 1 perm 3f010000 1000 1000 user mtk:key1: 12 For another example of the use of this program, see keyctl(2). Program source /*); }).
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2021-08-27 REQUEST_KEY(2)
Pages that refer to this page: keyctl(1), add_key(2), keyctl(2), syscalls(2), find_key_by_type_and_name(3),_read(3), keyctl_revoke(3), keyctl_search(3), keyctl_session_to_parent(3), keyctl_setperm(3), keyctl_set_reqkey_keyring(3), keyctl_set_timeout(3), keyctl_update(3), keyctl_watch_key(3), keyrings(7), keyutils(7), persistent-keyring(7), user-keyring(7), user-session-keyring(7)
|
https://michaelkerrisk.com/linux/man-pages/man2/request_key.2.html
|
CC-MAIN-2022-27
|
refinedweb
| 330
| 52.05
|
A Newspaper for COVID-19 — The CoronaTimes
It seems that the way that I consume information has changed a lot. I have become quite a news junkie recently. One thing, in particular, is that I have been reading quite a lot of international news to determine the stages of Covid-19 in my country.
To do this, I generally visit a lot of news media sites in various countries to read up on the news. This gave me an idea. Why not create an international news dashboard for Corona? And here it is.
This post is about how I created the news dashboard using Streamlit and data from NewsApi and European CDC.
TLDR; Link to the App here.
Getting The Data
The most important thing while creating this Dashboard was acquiring the data. I am using two data sources:
1. Data from the European Centre for Disease Prevention and Control.
The downloadable data file is updated daily and contains the latest available public data on COVID-19. Here is a snapshot of this data.
def get_data(date): os.system("rm cases.csv") url = "[]()" filename = wget.download(url,"cases.csv") casedata = pd.read_csv(filename, encoding='latin-1') return casedata
2. News API
The second source of data comes from the News API, which lets me access articles from leading news outlets from various countries for free. The only caveat is that I could only hit the API 500 times a day, and there is a result limit of 100 results for a particular query for free accounts.
I tried to get around those limit barriers by using streamlit caching(So I don’t hit the API a lot). I also tried to get news data from last month using multiple filters to get a lot of data.
from newsapi import NewsApiClient newsapi = NewsApiClient(api_key='aedb6aa9bebb4011a4eb5447019dd592')
The primary way the API works is by giving us access to 3 functions.
a) A function to get Recent News from a country:
json_data = newsapi.get_top_headlines(q=q,language='en', country='us') data = pd.DataFrame(json_data['articles']) data.head()
b) A function to get “Everything” related to a query from the country. You can see the descriptions of API parameters here:
json_data = newsapi.get_everything(q='corona', language='en', from_param=str(date.today() -timedelta(days=29)), to= str(date.today()), sources = 'usa-today', page_size=100, page = 1, sort_by='relevancy' ) data = pd.DataFrame(json_data['articles']) data.head()
c) A function to get a list of sources from a Country programmatically. We can then use these sources to pull data from the “everything” API
def get_sources(country): sources = newsapi.get_sources(country=country) sources = [x['id'] for x in sources['sources']] return sources sources = get_sources(country='us') print(sources[:5]) ------------------------------------------------------------------- ['abc-news', 'al-jazeera-english', 'ars-technica', 'associated-press', 'axios']
I used all the functions above to get data that refreshes at a particular cadence. You can see how I use these API functions in a loop to download the data by looking at my code at GitHub.
Creating the Dashboard
I wanted to have a few important information in the Dashboard that I was interested in. So I started by creating various widgets.
1. Current World Snapshot:
The first information was regarding the whole world situation. The Number of Cases and Deaths. The case and death curve in various countries? What are the fatality rates in various countries? Below is the current world situation on 28 Mar 2020.
Observations: We can see the deaths in Italy are still on the rise, while we are seeing the deaths shooting up in Spain, France, and the United States as well. The death rates in some countries are worrying with death rates of 10.56% in Italy and 8.7% in Iraq. I suspect that the death rate statistic of 2% in the starting days of CoronaVirus was misinformed if not wrong.
Technical Details — To create this part of the Dashboard, I used the ECDC data. I also used a lot of HTML hacks with Streamlit, where I used bootstrap widgets as well as custom HTML to get data in the way I wanted to display it. Here are a few of the hacks:
- Using Bootstrap Cards: You can use bootstrap or, in that case, any HTML element in Streamlit if you change the parameter unsafe_allow_html to True. Do note that I am also using python f string formatting here.
st.sidebar.markdown(f'''<div class="card text-white bg-info mb-3" style="width: 18rem"> <div class="card-body"> <h5 class="card-title">Total Cases</h5> <p class="card-text">{sum(casedata['cases']):,d}</p> </div> </div>''', unsafe_allow_html=True)
The above code is behind the Dashboard styled cards in the streamlit app sidebar.
- Changed the width of the streamlit main page:
Again, there was no parameter given by streamlit to do this, and I was finding the page width a little too small for my use case. Adding the above code at the start of the app solved the issue.
st.markdown( f""" <style> .reportview-container .main .block-container{{ max-width: 1000px; }} </style> """, unsafe_allow_html=True, )
2. Most Recent News from Country
The primary purpose of creating this Dashboard was to get news from various outlets from top media outlets in the country.
Observations: As here you can see, here we have the top recent news from the United Kingdom concerning cases in Ireland and Boris Johnson’s corona woes.
Technical Details: As said before, I am using the News API to get this data. And here is how I am using a mashup of HTML and markdown to display the news results.
def create_most_recent_markdown(df,width=700): if len(df)>0: # img url img_path = df['urlToImage'].iloc[0] if not img_path: images = [x for x in df.urlToImage.values if x is not None] if len(images)!=0: img_path = random.choice(images) else: <br> <br>"** for index, row in df.iterrows(): **markdown_str += f"[{row['title']}]({row['url']}) by {row['author']}<br> "** return markdown_str else: return ''
Few things to note here:
The image width cannot be set using markdown so using custom HTML
The usage of python f strings to create the article titles and URLs.
If no image is found, we are defaulting to a custom image.
3. News Sentiment
Another thing that has been bothering me in these trying times is so much negativity everywhere. I wanted to see the news covered from a positive angle if it could be in any way. So I did some simple sentiment analysis using the custom sentiment analyzer from Textblob to do this.
I found out sentiments by news outlets as well as some of the most positive and negative news related to Coronavirus in the past 30 days. (Past 30 days because I cannot go more back with the free API).
Observations: As you can see that one of the most positive news is Trump changing his coronavirus stance on March 17th, and I agree. The second positive report seems to be regarding some sort of solution to the problem. While the first Negative news is regarding Cardi B slamming celebrities for sowing confusion about the Coronavirus. I won’t comment on this :)
Technical Details: To get the sentiment scores of an article I used TextBlob. Getting the sentiment scores that range from -1 to 1 is as simple as using the below function. I used a concatenation of title and description to find the sentiment as the content from the News API was truncated.
def textblob_sentiment(title,description): blob = TextBlob(str(title)+" "+str(description)) return blob.sentiment.polarity
The main difficulty here was to have a two-column layout to give both positive and negative news. For that again, I had to use a mashup of HTML and markdown. I used the HTML table to do this. Also, note how I used markdown to convert markdown to HTML using Python f strings.
import markdown md = markdown.Markdown() positive_results_markdown = create_most_recent_markdown(positivedata,400) negative_results_markdown = create_most_recent_markdown(negativedata,400) html = f'''<table style="width:100%"> <tr> <th><center>Most Positive News</center></th> <th><center>Most Negative News</center></th> </tr> <tr> <td><center>**{md.convert(positive_results_markdown)}**</center></td> <td><center>**{md.convert(negative_results_markdown)}**</center></td> </tr> </table>''' #print md.convert("# sample heading text") st.markdown(html,unsafe_allow_html=True)
4. News Source WordCloud
A visualization dashboard that works with text is never really complete without a word cloud, so I thought of adding a word cloud to understand the word usage from a particular source.
Observations: We can see Vice news using words like “New” and “Tested” a lot of times. While Business Insider used “China” a lot.
Technical Details:Here is what I used to create this masked word cloud:
import cv2 def create_mask(): mask = np.array(Image.open("coronavirus.png")) im_gray = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY) _, mask = cv2.threshold(im_gray, thresh=20, maxval=255, type=cv2.THRESH_BINARY) mask = 255 - mask return mask mask = create_mask() def create_wc_by(source): data = fulldf[fulldf['source']==source] text = " ".join([x for x in data.content.values if x is not None]) stopwords = set(STOPWORDS) stopwords.add('chars') stopwords.add('coronavirus') stopwords.add('corona') stopwords.add('chars') wc = WordCloud(background_color="white", max_words=1000, mask=mask, stopwords=stopwords, max_font_size=90, random_state=42, contour_width=3, contour_color='steelblue') wc.generate(text) plt.figure(figsize=[30,30]) plt.imshow(wc, interpolation='bilinear') plt.axis("off") return plt st.pyplot(create_wc_by(source),use_container_width=True)
Other Technical Considerations
1. Advanced Caching:
In new streamlit release notes for 0.57.0 which just came out yesterday, streamlit has made updates to st.cache. One notable change to this release is the “ability to set expiration options for cached functions by setting the max_entries and ttl arguments”. From the documentation:
max_entries (int or None) — The maximum number of entries to keep in the cache, or None for an unbounded cache. (When a new entry is added to a full cache, the oldest cached entry will be removed.) The default is None.
ttl (float or None) — The maximum number of seconds to keep an entry in the cache, or None if cache entries should not expire. The default is None.
Two use cases where this might help would be:
If you’re serving your app and don’t want the cache to grow forever.
If you have a cached function that reads live data from a URL and should clear every few hours to fetch the latest data
So this is what is being used in a lot of functions to avoid hitting APIs multiple times and to prevent them from getting stale at the same time.
For Example, Top results from a country are fetched at a period of 360 seconds i.e., 6 minutes.
st.cache(ttl=360,max_entries=20) def create_dataframe_top(queries,country): #Hits API Here
While full results from the everything API are fetched at a period of one day.
[@st]().cache(ttl = 60*60*24,max_entries=20) def create_dataframe_last_30d(queries, sources): # hits API
2. Deployment:
I used the amazon free ec2 instance to deploy this app at. If you want to know the steps,read my post on How to Deploy a Streamlit App using an Amazon Free ec2 instance?
There are also a few caveats:
Since it is a free server, it might not take too much load.
I have not thoroughly tested the caching routine. I just hope that there are no memory errors with the limited memory on the server.
The News API is also free. There might be rate limits that might kick in even after I have tried to handle that.
3. Learning
For folks who are lost, you might like to start with the basics first. Here is my introductory posts on Streamlit and Plotly express.
- How to write Web apps using simple Python for Data Scientists?
- Python’s One Liner graph creation library with animations Hans Rosling Style
Conclusion
Here I have tried creating a dashboard for news on Coronavirus, but it is still in a nascent stage, and a lot needs to be done.
For one, it needs a large server. For another, a lot of time to improve the visualization and layouts. And also a lot of testing.
Also, we have done a few things in a roundabout way using HTML and few hacks. There are still a lot of things that I will love to have in Streamlit. I have been in talks with the Streamlit team over the new functionality that they are going to introduce, and I will try to keep you updated on the same. The good news is that Layout options are a part of the new functionality that Streamlit is working on.
You can find the full code for the final app here at my Github repo. And here is the full app on the web...
|
https://mlwhiz.com/blog/2020/03/29/coronatimes/
|
CC-MAIN-2020-40
|
refinedweb
| 2,130
| 65.42
|
By analogy with 1671880 [variable] should be modified to permit no arguments.
This enhancement would be useful for the following idioms:
allow_comments - 1
this is done.
Isn't this already done in the TIP #323 implementation?
Logged In: YES
user_id=148712
Originator: NO
This looks like a valuable RFE to me, I'd even extend it to [global], [upvar], [namespace upvar]. But it does require a simple TIP, as the docs (and tests) need changing.
Logged In: YES
user_id=80530
Originator: NO
Since [variable] takes alternating
arguments to represent varname and values,
I think the idiom is:
proc foo {dict} {
variable {*}$dict
}
Logged In: YES
user_id=19214
Originator: YES
proc foo {args} {
variable {*}$args
}
is the idiom I was thinking of.
Logged In: YES
user_id=148712
Originator: NO
messagw truncated: which idioms?
|
https://core.tcl-lang.org/tcl/tktview/1881284fffffffffffff
|
CC-MAIN-2021-25
|
refinedweb
| 132
| 61.67
|
Hi - I have game objects tagged MenuOption0, MenuOption1, MenuOption2, etc. I want to keep a currentIndex var, which gets incremented each time the user swipes on the controller. That swipe should increment the currentIndex var, and switch the reference to a new game object, whose text color will be changed.
This is what I'm trying as a test:
GameObject testText = GameObject.Find("MenuOption" + 4);
TextMeshProUGUI thisTextMesh = testText.GetComponent<TextMeshProUGUI>();
thisTextMesh.color = colorWhite;
How do I dynamically construct a reference to MenuOption1,2,3,4,etc? Or is there a better way to handle this without writing a big long conditional?
You don't want to just chuck your MenuOption GameObjects into a list or an array?
Answer by JVene
·
Jul 14, 2018 at 04:31 AM
@mheavers
It's the etc. that makes the setup such an unattractive proposition. When there's 8, or 20 or more, it gets ugly.
I assume a supervisory script owns the code you posted. I'll call it a class MenuSupervisor. Something like:
public class MenuItem : MonoBehaviour
{
void Awake() { MenuSupervisor.RegisterMenuItem( this ); }
}
public class MenuComp : IComparer< MenuItem >
{
public int Compare( MenuItem a, MenuItem b )
{
return System.String.Compare( a.gameObject.name, a.gameObject.name );
}
}
public class MenuSupervisor : MonoBehaviour
{
private static List< MenuItem > menuEntries;
public static void RegisterMenuItem( MenuItem m )
{
menuEntries.Add( m );
}
void Start()
{
menuEntries.Sort( new MenuComp() );
}
}
In this approach the objects you call MenuOptionX would need a script attached to each, which is MenuItem in the above example. This script uses Awake (it's exactly like Start, but all Awakes are completed before the Starts are executed).
MenuItems thus register themselves with the supervisor during the Awake phase (which is before the Start phase).
I assume you want the list in order, so I've added an IComparer example to sort the menu items by name, which you could customize for any sorting order you'd prefer.
The supervisor's Start function, which should be called after all the menu items have registered, sorts the list by name. Take care of your naming convention as MenuOption1 and MenuOption10 would precede MenuOption2 when sorted this way. You may do better with MenuOption01, 02, 03....
In any event, you now have a List which you can address as an array using an index, where each entry is a MenuItem, such that
List[ x ].gameObject.GetComponent<TextMeshProUGUI>().color = colorWhite
Would work for any x. Well, any x that's within List.
Of course, the MenuItem could use Start to cache the TextMeshProUGUI if you like.
Answer by Jehy
·
Jul 14, 2018 at 04:07 AM
@mheavers Perhaps I'm not understanding the totality of what you're trying to accomplish but I feel like the best way you approach this, due to the similar nature of the objects you're referencing, is with a collection: MenuOptions collection: Order the MenuOptions into a container (list, array, etc) on startup. Then whenever your input logic needs to use a new MenuOption, you can just use the reference from your master list. That should let you grab the reference you need by the index at runtime.
Alternatively: If searching by name is required, and you know the count of MenuOptions, you could similarly use a container and populate it at runtime with a loop. That would eliminate the need to frequently call GameObject.Find();
Yikes, I was typing out something similar while you66 People are following this question.
Creating a GameObject variable without instantiating it?
1
Answer
How to detect an object which be in FOV of certain camera ?
1
Answer
Distribute terrain in zones
3
Answers
Access water4 scripts via my script?
1
Answer
how to change gameobject color in hierarchy for highlight purposes.
2
Answers
|
https://answers.unity.com/questions/1529072/how-to-dynamically-construct-a-reference-to-a-game.html
|
CC-MAIN-2019-39
|
refinedweb
| 620
| 55.95
|
By Nicholas Ballard
Compile VK
Source: towards Data Science
It can be said that every “computer user” needs to adjust the size of the image at a certain point in time. The preview version of Mac OS can do this, and so can windows PowerToys.
This article uses Python to resize images. Fortunately, image processing and command-line tools are two of Python’s strengths.
This article aims to show you three things:
Basic concept of image.
Python library for manipulating images.
You can use the code in this article in your own projects.
The command line program we want to build can resize one or more image files at a time.
Create image
In this example, we will create our own image instead of finding a real image to manipulate.
Why? In fact, creating images is a good way to explain what an image is actually. This resizing program also works on instagram.
So, what is an image? In Python data terminology, an image is a list of int tuples.
image = list[list[tuple[*int, float]]]
Numpy is defined as a two-dimensional shape array (h, W, 4), where h represents the number of high pixels (up and down) and W represents the number of wide pixels (from left to right).
In other words, an image is a list of pixels (rows) (the entire image). Each pixel consists of 3 integers and 1 optional floating-point number: red channel, green channel, blue channel and alpha (floating-point optional). The red, green, and blue channels (RGB) have values from 0 to 255.
From now on, we will discuss color images without alpha channels to keep it simple. Alpha is the transparency of the pixel. Images can also have only one channel with a value from 0 to 255. This is a grayscale image, that is, a black-and-white image. Here we use color images!
import matplotlib as plt pixel: tuple = (200, 100, 150) plt.imshow([[list(pixel)]])
Making images in pure Python
Python is fully capable of creating images. To display it, I’ll use the Matplotlib library, which you can use to install:
pip install matplotlib
Create pixels:
from dataclasses import dataclass @dataclass class Pixel: red: int green: int blue: int # alpha: float = 1 pixel = Pixel(255,0,0) pixel # returns: # Pixel(red=255, green=0, blue=0, alpha=1)
Create image:
from __future__ import annotations from dataclasses import dataclass, astuple from itertools import cycle from typing import List import matplotlib.pyplot as plt import matplotlib.image as mpimg @dataclass class Pixel: red: int green: int blue: int # alpha: float = 1 pixel = Pixel(255,0,0) pixel marigold: Pixel = Pixel(234,162,33) red: Pixel = Pixel(255,0,0) Image = List[List[Pixel]] def create_image(*colors: Pixel, blocksize: int = 10, squaresize: int = 9) -> Image: "" "make a square image (the same width and height) with configurable pixel blocks." Args: Colors (pixel): parameters of iterative color rendering order. Blocksize (int, optional): [description]. Default: 10 Squaresize (int, optional): [description]. Default 9 Returns: Image: a beautiful square picture! """ img: list = [] colors = cycle(colors) for row in range(squaresize): row: list = [] for col in range(squaresize): Color = next (colors) # set color for _ in range(blocksize): values: list[int] = list(astuple(color)) row.append(values) [img. Append (row) for in range (squaresize)] # create row height return img if __name__ == '__main__': image = create_image(marigold, red) plt.imshow(image)
This is the rendered image. Behind this, the data is as follows:
[[[234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [234, 162, 33], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [255, 0, 0], [234, 162, 33], ...
Now that we have an image, let’s resize it!
Adjust the size in Python
Writing an image resizing algorithm in Python actually has a lot of work.
There are many contents in image processing algorithms, and some people have contributed a lot to this. For example, resampling – a pixel is used in the reduced image to represent the surrounding high-resolution pixels. Image processing is a huge topic. If you want to see it for yourself, take a look at the image.py of pilot, which is in the path path / to / site packages / PIL.
There are also some optimizations, such as anti aliasing and reducing gap… There is a lot of content here. We stand on the shoulders of giants and can solve our problems with one line of code.
If you are interested in learning more about what happens behind the scenes when processing images, I encourage you to check out the topic of “machine vision” more! This is definitely a booming field.
If you do well enough, many companies will be willing to pay the highest price for your computer vision expertise. Autopilot, IOT, surveillance, you name it; All basically depend on processing pictures (usually in Python or C + +).
A good starting point is to view the scikit image.
OpenCV
Opencv can be used for image processing. He wrote it in C + + and ported it to python
import cv2: np.ndarray = cv2.imread(fp) width, height, channels = im.shape new_width: int = _scale(width, scale) new_height: int = _scale(height, scale) new_dim: tuple = (new_width, new_height) return cv2.resize(src=im, dsize=new_dim, interpolation=cv2.INTER_LINEAR)
The options for the interpolation parameter are one of the flags provided in the CV2 package:
INTER_ Nearest – nearest neighbor interpolation INTER_ Linear – bilinear interpolation (default) INTER_ Area – resampling using pixel area relationships. It may be the preferred method of image extraction. But when the image is scaled, it is similar to Inter_ Nearest method. INTER_ Cubic – one greater than 4 × Bicubic interpolation of 4-pixel neighborhood INTER_ Lanczos4 – one greater than 8 × Lanczos interpolation in 8-pixel neighborhood
After return:
resized = resize("checkers.jpg", 50) print(resized.shape) Plt.imshow (resized) # or cv2.imshow ("name", image)
It did what we expected. Images range from 900 pixels high and 900 pixels wide to 450 × 450 (there are still three color channels). The screenshot above doesn’t look very good because of the mattlotlib coloring of jupyter lab.
Pillow
The pilot library has a resizing method on the image class. Its parameters are:
size: (width, height) Resample: the default is the parameter required by bicubic. Resampling algorithm. Box: the default is none. For a 4-tuple, the image rectangle working within the parameters (0,0, width, height) is defined. reducing_ Gap: the default is none. Resampling optimization algorithm to make the output look better.
Here are the functions:
from PIL import Image = Image.open(fp) width, height = im.size new_width: int = _scale(width, scale) new_height: int = _scale(height, scale) new_dim: tuple = (new_width, new_height) return im.resize(new_dim)
The function using pilot is very similar to OpenCV. The only difference is that the pil.image.image class has an attribute size for accessing the image (width, height).
The result is:
resized = resize("checkers.jpg", 30.5) print(resized.size) resized.show("resized image", resized)
Notice how the show method opens the default program of the operating system to view the file type of the image.
Create command line program
Now that we have an image resizing function, it’s time to have a user interface to run resizing.
It is possible to resize an image. But we want to be able to process images in batches.
The interface we’re going to build will be the simplest: the command line utility.
Pallets project is the genius community behind flash. It is a Jinja template engine: click(。)
pip install click
Click is a library for making command line programs. This is better than using normal argparse or
if __name__ == '__main__':It’s much better to start some if then logic in. Therefore, we will use click to decorate our image adjuster.
The following is a complete script for resizing images from the command line!
""" resize.py """ from __future__ import annotations import os import glob from pathlib import Path import sys import click from PIL import Image """ file: """ SUPPORTED_FILE_TYPES: list[str] = [".jpg", ".png"] def name_file(fp: Path, suffix) -> str: return f"{fp.stem}{suffix}{fp.suffix}" def resize(fp: str, scale: Union[float, int]) -> Image: "" "resize the image to maintain its scale" Args: FP (STR): path parameter of image file Scale (Union [float, int]): percentage as a parameter. For example: 53 Returns: Image (NP. Ndarray): a scaled down picture """ _scale = lambda dim, s: int(dim * s / 100) im: PIL.Image.Image = Image.open(fp) width, height = im.size new_width: int = _scale(width, scale) new_height: int = _scale(height, scale) new_dim: tuple = (new_width, new_height) return im.resize(new_dim) @click.command() @click.option("-p", "--pattern") @click.option("-s", "--scale", default=50, help="Percent as whole number to scale. eg. 40") @click.option("-q", "--quiet", default=False, is_flag=True, help="Suppresses stdout.") def main(pattern: str, scale: int, quiet: bool): for image in (images := Path().glob(pattern)): if image.suffix not in SUPPORTED_FILE_TYPES: continue im = resize(image, scale) nw, nh = im.size suffix: str = f"_{scale}_{nw}x{nh}" resize_name: str = name_file(image, suffix) _dir: Path = image.absolute().parent im.save(_dir / resize_name) if not quiet: print( f"resized image saved to {resize_name}.") if images == []: print(f"No images found at search pattern '{pattern}'.") return if __name__ == '__main__': main()
The command line program runs from the entry point function main. Parameters are passed to the click.option option:
patternUse string form to locate one or more images related to the directory in which the script runs.
--pattern="../catpics/*.pngLooks up the catpics folder one level and returns all the files in the folder with the. PNG image extension.
scaleAccepts a number, floating point, or integer and passes it to the resize function. This script is very simple without data validation. If you add it to the code, the check scale is a number between 5 and 99 (reasonable reduction scale parameter). You can pass
-s "chicken nuggets"Make settings.
- If you do not want to output text to the standard stream while the program is running, then
quietIs an option parameter.
Run the program from the command line:
python resize.py -s 35 -p "./*jpg"
result:
$ py resize.py -p "checkers.jpg" -s 90 resized image saved to checkers_90_810x810.jpg.
Checking folder:
$ ls -lh checkers* -rw-r--r-- 1 nicho 197609 362K Aug 15 13:13 checkers.jpg -rw-r--r-- 1 nicho 197609 231K Aug 15 23:56 checkers_90_810x810.jpg
not bad So the program shrinks the image and gives it a descriptive label. We can see that the file size ranges from 362KB to 231kb!
In order to see that the program processes multiple files at the same time, we will run it again:
$ py resize.py --pattern="checkers*" --scale=20 resized image saved to checkers_20_180x180.jpg. resized image saved to checkers_90_810x810_20_162x162.jpg.
File system output:
$ ll -h checkers* -rw-r--r-- 1 nicho 197609 362K Aug 15 13:13 checkers.jpg -rw-r--r-- 1 nicho 197609 1.8K Aug 16 00:23 checkers_20_180x180.jpg -rw-r--r-- 1 nicho 197609 231K Aug 15 23:56 checkers_90_810x810.jpg -rw-r--r-- 1 nicho 197609 1.8K Aug 16 00:23 checkers_90_810x810_20_162x162.jpg
As long as the pattern is matched, recursion can process any number of images.
Click
ClickIs a magical tool. It can wrap a function and extract it from a module in a “normal way”
if __name__ == '__main__'Statement run. (in fact, it doesn’t even need to do this; you just need to define and decorate the functions to run), but its real highlight is to install the script as a package.
This comes with Python
setuptoolsLibrary completed.
This is mine
setup.py.
from setuptools import setup setup( name='resize', version='0.0.1', py_modules=['resize'], install_requires=[ 'click', 'pillow', ], entry_points=''' [console_scripts] resize=resize:main ''' )
Generate an executable / wrapper using the following command:
pip install -e .
Now you can call the script without using Python commands. In addition, if you add a new executable to a folder in the path, you can call this program from anywhere on your computer, such as
resize -p *jpg -s 75
conclusion
This tutorial has done a lot of research:
- Firstly, some third-party Python libraries for image processing are introduced.
- Then build an image from scratch using Python to further understand the actual meaning of the image.
- Then, select one of the options and build a script to reduce the image while maintaining the image scale.
- Finally, put all this in a command-line utility and accept configurable options through click.
Keep in mind that writing code can take hours or days. But it only takes a few milliseconds to run. The program you make doesn’t have to be big. Anything that can save you time or make you produce more output may serve the rest of your life!
resources
- click()
- matplotlib()
- opencv()
- pillow()
- scikit-image()
Original link:
Welcome to panchuang AI blog:
Official Chinese document of sklearn machine learning:
Welcome to panchuang blog resources summary station:
|
https://developpaper.com/resizing-images-using-python/
|
CC-MAIN-2021-43
|
refinedweb
| 2,189
| 66.33
|
On Thursday 16 August 2001 23:37, David Eisenberg wrote:
> I am attempting to read an XML file, use Xalan to convert it to
> SVG, and then use Batik's transcoder to create a JPG file to send
> to a user (all this from a servlet). I get a transcoder failure,
> as shown below.
Could you send us more details of what the transformer methods are doing?
I guess that some attributes might be missing such width/height or that your
svg elements might not be created in the SVG namespace.
Thierry.
---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org
|
http://mail-archives.apache.org/mod_mbox/xmlgraphics-batik-users/200108.mbox/%3C01082115512907.24070@quokka%3E
|
CC-MAIN-2014-10
|
refinedweb
| 113
| 71.85
|
mysql connector/j807599 Mar 6, 2007 7:56 AM
okay im using netbeans, the newest versions of mysql, connector/j, jdk, everything right. and this file compiles fine, but when i run it i get an error...okay im using netbeans, the newest versions of mysql, connector/j, jdk, everything right. and this file compiles fine, but when i run it i get an error...
package mysqlTest; import java.sql.*; public class NewClass{ public static void main(String[] args){ try{ Class.forName("com.mysql.jdbc.Driver").newInstance( ); Connection conn = DriverManager.getConnection( "jdbc:mysql://localhost:3306/test", "root","****"); ResultSet cursor = conn.createStatement( ).executeQuery( "INSERT into javatest VALUES('Steve', 'Granet')"); conn.close( ); } catch (Exception e){ System.err.println(e); } } }
init:
deps-module-jar:
deps-ear-jar:
deps-jar:
compile-single:
run-main:
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
BUILD SUCCESSFUL (total time: 1 second)
now i added the connection to netbeans, all worked fine right, i can connect to my database, but when i do it through my own code it doesnt work.
This content has been marked as final. Show 13 replies
1. Re: mysql connector/j807599 Mar 6, 2007 8:00 AM (in response to 807599)
Connection conn = DriverManager.getConnection(add '?' at the end of url string, so your url become like this
"jdbc:mysql://localhost:3306/test",
Message was edited by:
"jdbc:mysql://localhost:3306/test?"
p_epi
2. Re: mysql connector/jmasijade Mar 6, 2007 8:15 AM (in response to 807599)Classpath, classpath, classpath.
Since you are using netbeans, ensure that the driver jarfile is included as one of the libraries for the project.
Edit: or if through your own code means from the command line, then ensure that the jarfile is listed under the -cp option.
3. Re: mysql connector/j807599 Mar 6, 2007 8:15 AM (in response to 807599)no that didnt fix it, where am i supposed to put the connector/j driver at? what specific folder for windows? i mean i just have it on my desktop, but i thought netbeans setup the classpath to read it... but maybe it didnt, and i dont want a classpath going to a random folder on my desktop.
but then why could i access the db through the 'runetime' tab in netbeans?
4. Re: mysql connector/jmasijade Mar 6, 2007 8:17 AM (in response to 807599)You need to add it to the project as a library. It doesn't matter where you have it, as long as it is configured in the project.
5. Re: mysql connector/j807599 Mar 6, 2007 8:17 AM (in response to 807599)well its nice that ya'll are helping me out, but how do i add it as a library?
6. Re: mysql connector/j807599 Mar 6, 2007 8:22 AM (in response to 807599)okay im an idiot, i figured out how to add the libary, and set the classpath for the library, but it still didnt fix the error, i recompiled and ran again, and i still get the same error.
7. Re: mysql connector/j807599 Mar 6, 2007 8:25 AM (in response to 807599)okay, after more tinkering i got it right, but now i get this huge ass long error...
init: deps-module-jar: deps-ear-jar: deps-jar: compile-single: run-main: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception: ** BEGIN NESTED EXCEPTION ** java.net.ConnectException MESSAGE: Connection refused: connect STACKTRACE::208) at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:173) at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:267) at com.mysql.jdbc.Connection.createNewIO(Connection.java:2739) at com.mysql.jdbc.Connection.<init>(Connection.java:1553) at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:266) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:185) at mysqlTest.NewClass.main(NewClass.java:10) ** END NESTED EXCEPTION ** Last packet sent to the server was 16 ms ago. BUILD SUCCESSFUL (total time: 3 seconds)
8. Re: mysql connector/jmasijade Mar 6, 2007 8:28 AM (in response to 807599)Stupid question, but did you remove the "?" from the URL that p_epi suggested you add. If you did not remove it, then remove it now. That is only needed if you are passing the user and password as a part of the URL, which you are not.
9. Re: mysql connector/j807599 Mar 6, 2007 8:29 AM (in response to 807599)yeah i just removed it and tried again, same error
10. Re: mysql connector/jmasijade Mar 6, 2007 8:43 AM (in response to 807599)A few more stupid questions, but they need to be asked.
1. Is the DB currently running?
2. Is it using port 3306 (I assume it is since that is the default)?
3. In the configuration (the mysql.users table) is there an entry "root@localhost" or "root@%" or "root@<localip/hostname>"?
If the entry is "root@<localip/hostname>" exists, then change the connection url from "localhost" to the actual ip.
If none of the above is present, then add the "root@localhost".
If the above suggestion does not work, and the "root@<localip/hostname>" entry does not exist, add it and follow the first suggesstion.
The last suggesstion above is because I am not sure if the "root@localhost" option only allows access through the "sock" file, or if it also allows TCP access.
11. Re: mysql connector/j800308 Mar 6, 2007 10:37 AM (in response to 807599)
but when i do it through my own code... ummm... could you post your code?
keith.
12. Re: mysql connector/jmasijade Mar 6, 2007 11:00 AM (in response to 800308)He says in his follow up thread
that he has it working, but does not say what change he had to make.
It all had to do with classpath and then (probably) connection parameters (username, password, mysql config).
13. Re: mysql connector/j807599 Mar 6, 2007 11:09 AM (in response to 807599)you need to have the jar file in your classpath which has com.mysql.jdbc package and a Driver class within tht package
|
https://community.oracle.com/message/5047723
|
CC-MAIN-2016-40
|
refinedweb
| 1,021
| 57.16
|
Find missing elements of a range in C++
In this tutorial, we will learn how to find missing elements of a range in C++. Here, we will be given an array of elements and two numbers. We will have to find all the missing numbers in that range whether present in the array or not.
C++ Program to find all the missing elements in an array for a given range
According to the given question, let’s take an example:
Case 1:
Array = {20,26,18,23,1,45,32,24,33,50,55} Range = 21 - 35 Missing = 21,22,25,27,28,29,31,34,35
Hence, we should write our program in such a way that it deals with the above-stated example where the lower bound is greater than the lowest element in the array and the higher bound is lower than the highest element in the array.
Case 2:
Array = {20,26,18,23,1,45,32,24,33,50,55} Range = 11 - 35 Missing = 11,12,13,14,15,16,17,19,21,22,25,27,28,29,31,34,35
Hence, our algorithm should also deal with the case where the lower bound is less than the lowest element in the array and the higher bound is lower than the highest element in the array.
Case 3:
Array = {20,26,18,23,1,45,32,24,33,40,38,42,41} Range = 21 - 48 Missing = 21 22 25 27 28 29 30 31 34 35 36 37 39 43 44 46 47 48
Hence, the algorithm should also deal with the case in which lower bound is greater than the lowest element but the higher bound is also greater than the highest element in the array.
Our approach to this problem is very simple, we will first sort the algorithm using the sort method present in STL in C++. Then we will search for lower bound in our algorithm using the binary search algorithm. Then we start traversing our array from that element, if the next number is present in the array we will increment the number, if not present we will print it out and the loop will run till either we are out of bounds of the array or we reach higher bound. So this is the simplest approach to solve the given program. Now let us code for the above-explained problem and procedure.
Code for the algorithm:
#include <bits/stdc++.h> using namespace std; // Print all elements of range [low, high] that // are not present in arr[0..n-1] void printMissingItems(int array[], int n, int low, int high) { // Sort the array sort(array, array + n); // finding lower bound int* ptr = lower_bound(array, array + n, low); int lowerIndex = ptr - array; //start from lower index and traverse int i = lowerIndex, x = low ; while (i < n && x <= high) { // If x doesn't match print if (array[i] != x) cout << x << " "; // If x matches, move to next element in array else i++; // Move to next element in range [low, high] x++; } // Print range elements thar are greater than the // last element of sorted array. while (x <= high) cout << x++ << " "; } // Main program int main() { int array[] ={20,26,18,23,1,45,32,24,33,50,55}; int n = sizeof(array) / sizeof(array[0]); int low = 21, high = 35; count<<"MISSING NUMBERS ARE"<<endl; printMissingItems(array, n, low, high); return 0; }
Output:
MISSING NUMBERS ARE 21 22 25 27 28 29 30 31 34 35
Hence, we have found the solution to the given problem. I hope this tutorial will clear all your doubts.
Thank You.
|
https://www.codespeedy.com/find-missing-elements-of-a-range-in-c/
|
CC-MAIN-2021-10
|
refinedweb
| 603
| 57.44
|
by Rahul Biswas
In this tip, you'll learn how to prevent non-repeatable reads through the Java Persistence API (JPA) and JPA extensions provided by EclipseLink.
Non-Repeatable Reads.
Non-repeatable reads apply to the following scenario: A transaction, T1, reads a row in a database. Another transaction, T2, then modifies or deletes that row before T1 has committed. Both transactions eventually commit successfully.
Generally one of the following approaches is used to prevent that scenario:
Consider, for example, the following simple table in a database:
Suppose an application, A, starts a transaction, T1, and queries the table to retrieve the price for "Expensive Item". Suppose A then starts another transaction, T2, to update the price for "Expensive Item".
Figure 1 illustrates the optimistic approach. Transaction T1 assumes that data is not being concurrently modified, although, in fact, transaction T2 does concurrently modify the data and proceeds with the commit. When T1 attempts to commit, it detects the change and notifies the application, which may rollback the T1 transaction.
Figure 2 illustrates the pessimistic approach. Here, the T2 update is blocked until the T1 transaction commits.
Preventing Non-Repeatable Reads in JPA
You can use JPA to prevent non-repeatable reads on versioned entities.
A versioned entity is marked with the
@Version annotation, as illustrated
in the following code snippet:
@Entity public class StockQuote implements Serializable { @Version public Long getVersion() { return version; }
and its corresponding database schema has a version column, such as that created by the following SQL statement:
CREATE TABLE STOCKQUOTE (ID NUMBER NOT NULL, VERSION NUMBER, PRICE FLOAT, DESCRIPTION VARCHAR(255), PRIMARY KEY (ID));
Versioning enables JPA to manage optimistic locking. Optimistic locking assumes that there will be infrequent conflicts between concurrent transactions. In optimistic locking, the objective is to give concurrent transactions a lot of freedom to process simultaneously, but to detect and prevent collisions.
The way you prevent non-repeatable reads in JPA on a versioned entity is through the
lock() method of
the
EntityManager class. Here is the method signature:
public void lock(Object entity, LockModeType lockMode);
The first method parameter is the entity instance that needs to be locked in the transaction.
The second method parameter is the lock mode, which can have one of the following values:
Both lock modes prevent non-repeatable reads. However, the WRITE lock mode also forces the version column to be updated.
As illustrated in Figure 1, transaction T1 acquires a read lock, but
transaction T2 is allowed to commit its changes. When transaction T1 tries to commit, the EclipseLink JPA implementation
detects that the data changed since the last read. It does this by checking the version column.
It then throws an
OptimisticLockException to
application A. At this point, the application can retry the operation after refreshing
the value of the entity. Alternatively, the application can abort the operation and rollback the transaction.
Taking Advantage of EclipseLink Extensions
EclipseLink is an open source project whose goal is to provide a comprehensive persistence framework that will run in any Java environment and that will support the reading and writing of objects to and from virtually any type of data source. One of the project's deliverables is an advanced features extension to JPA. You can take advantage of this extension to ensure repeatable reads through pessimitic locking, something that is not currently supported in the JPA 1.0 specification. However, this solution is not portable because it uses EclipseLink-specific extensions.The way to ensure repeatable reads with EclipseLink is through its support for pessimistic locking on JPA Query Language (JPA QL) queries. Pessimistic locking assumes that there will be frequent conflicts between concurrent transactions. To prevent collisions during pessimistic locking, an entity is locked in the database for the entire time that it is in application memory.
EclipseLink enables pessimistic locks on JPA QL queries through query hints, which are JPA extension points for vendor-specific query features.
There are two ways to enable a pessimistic lock through a JPA QL query hint. One way is to use
a
@NamedQuery annotation, as in the following
example:
@NamedQuery( name="GetStock" query="select sq from StockQuote as sq where sq.id = :id" hints={@QueryHint(name=EclipseLinkQueryHints.PESSIMISTIC_LOCK, value=PessimisticLock.Lock)})
The other way is to use the Query API, as in the following example:
Query q = em.createNamedQuery("GetStock"); q.setHint(EclipseLinkQueryHints.PESSIMISTIC_LOCK, PessimisticLock.Lock); q.setParameter("id", 1);
Acquiring a pessimistic lock through either of these techniques typically locks the pertinent row in the underlying database. As illustrated in Figure 2, if transaction T1 runs either of the two queries above, it locks the underlying datasource row and blocks T2 from committing its updates. After T1 commits, T2 can commit its changes.
Sample Application
Let's look at a sample application that prevents non-repeatable reads through EclipseLink JPA extensions. In fact, the application also allows non-repeatable reads so that you can compare the results of both types of reads. You can find the application in the sample package that accompanies this tip.
The sample application is a simplified version of a stock market application that would typically handle a large number of concurrent transactions, and in doing so, service simultaneous reads and writes to entities. In general, here's what the application does:
createDDL.ddbc):
persistence.xmlfile in the META-INF directory of an application. Here is the content of the
persistence.xmlfile for the sample application:
ReaderThreadand a
WriterThreadthread. The application can run in one of three modes, repeatable read with optimistic locking, repeatable read with pessimistic locking or non-repeatable read. You indicate which mode by specifying
rrand
o, for repeatable read with optimistic locking,
rrand
p, for repeatable read with pessimistic locking, or
nrr, for non-repeatable read, when you start the application. The application uses the entries (or entry) as arguments when it starts the
ReaderThread. The code for this part of the application is in file
Main.java. Here is a snippet of that code:
In repeatable read with optimistic locking mode, the
ReaderThread begins a transaction
on an entity. The
WriterThread then updates the stock price during the
ReaderThread
transaction and successfully commits. The
ReaderThread attempts to commit its transaction.
However, the underlying JPA implementation detects that the corresponding database row has been updated by
another thread after it was last read, so it throws an
OptimisticLockException. The application
catches this exception and flags the transaction as failed.
In repeatable read with pessimistic locking mode, the
ReaderThread begins a transaction
on an entity. The
WriterThread then attempts to update the stock price during the
ReaderThread transaction. However, the update attempt is blocked because the
ReaderThread
has a lock on both the entity object and the underlying database row representing the object.
In non-repeatable read mode, the
ReaderThread begins a transaction on an entity. Then the
WriterThread updates the stock price on the same entity and commits its transaction before the
ReaderThread commits its transaction. The
ReaderThread detects the
conflict by detecting the difference in the price and prints out a warning message.
Here is part of the code in the
ReaderThread:
ReaderThreadenables repeatable reads through pessimistic locking in a JPA QL query hint:
Running the Sample Application
A sample package accompanies this tip. To install and run the application in the sample package:
<sample_install_dir>/jpa-repeatable-read.tech-tip, where
<sample_install_dir>is the directory where you installed the sample package. For example, if you extracted the contents to
C:\on a Windows machine, then your newly created directory should be at
C:\jpa-repeatable-read.tech-tip.
jpa-repeatable-read.tech-tipdirectory and set the values for following properties in the
build.xmlfile as appropriate for your operating environment:
jdbc.url,
db.userid,
db.password, and
javaee.home. The application uses these property settings to update the
persistence.xmlfile and in the initial setup on the database.
jpa-repeatable-read.tech-tipdirectory:
ant setup
build.xmlfile have the values
rrand
oas follows:
jpa-repeatable-read.tech-tipdirectory:
ant run allYou should see output similar to the following:
WriterThreadand throws an
OptimisticLockException. Then the
ReaderThreadcatches the exception and flags the transaction as failed.
build.xmlfile have the values
rrand
pas follows:
<target name="run" depends="clean, pusetup, compile"> <java classname="com.sun.techtip.sample.Main" fork="true"> <arg value="rr"/> <arg value="p"/>
jpa-repeatable-read.tech-tipdirectory:
ant run allYou should see output similar to the following:
WriterThreadis blocked from committing the transaction until the
ReaderThreadtransaction has committed.
build.xmlfile to the value
nrr. Then enter the following command in the
jpa-repeatable-read.tech-tipdirectory:
ant run allYou should see output similar to the following:
ReaderThreadretrieves a value of 49.30 the first time it requests the price. The
WriterThreadis allowed to update the value to 54.23 concurrent with the
ReaderThreadtransaction. The second
ReaderThreadread retrieves the updated price because it is made after the
WriterThreadcommits its transaction.
Note: Although the application should run with various types of databases, it has been tested only with an Oracle database..
Further Reading
About the Author
Rahul Biswas is a member of the Java Performance Engineering group at Sun.
This is the 1000th approach to locking what...ah, databases that already support locking. Get rid of database specific functions. Why not extent JPA with a LOCK column like it's provided VERSION column, so you can have both, optimistic and pessimistic locking on row level. (I recently ran accross such a sollution, easy to handle even on databases without any sort of locking.)
Posted by Onkobu on July 02, 2008 at 11:49 PM PDT #
Hi Onkobu,
Thanks for your comments. The current JPA 1.0 spec, on which this tech-tip is based, considers pessimistic locking out of the scope of the specification. So, it can only be done in vendor specific manners (which in this case is through EclipseLink specific query hints and relies on the underlying database to do row level locking).
If I understand your LOCK column suggestion, I am not sure how sturdy that approach would be. There are some considerations, such as what will happen if an application that got the lock by updating the lock column crashed?
There is some discussion on having a standard way of doing pessimistic locking in JPA in the 2.0 version of the specification.
You can access the early draft here (JSR 317):
And I would also encourage you to post your comments on pessimistic locking to this email list:
jsr-317-comments@jcp.org
Posted by Rahul on July 07, 2008 at 04:03 PM PDT #
I had a look at the JSR and I'm convinced, that handing over control to the persistence manager is sufficient. And I still can't believe, that pessimistic locking requires nothing more than an annotation while optimistic locking forces me to add a version field...what are annotations for? And as a short question-like answer to your "what if" scenario: What if not only the application freezes, but your database host tunnels to another dimension? (Oracle has tried to provide pessimistic locking for decades and there's always a scenario deadlocking even database processes. So flagging a row as read only and propagating this to all dependend records causing a red outline on user's interface is better than showing the hour glass curser...for weeks.) To conclude this: I add a boolean attribute lock to each leaf class. Every depending class uses the and-combined lock-attributes of it's dependencies and that's it. Now the session beans gets something like a select for update method (viewRecord and viewRecordAndLock) and there you go...no magic, no plugins and a simple check box on admin's interface to delete (unwanted) locks. Concurrency is handled by EJB spec.
Posted by Onkobu on July 21, 2008 at 01:02 AM PDT #
Hi Onkobu,
I think the question boils down to who do you want to be responsible for managing the locks, the JPA implementation running within your brand new application or a tried and tested database which is supposed to be 24x7. I am not questioning the possibility, but the reliability of this approach. Having said that, you have some interesting comments and I would again encourage you to bring this up in the jsr-317-comments@jcp.org mailing list. Folks there might have discussed something similar and provide better insights into the reasons behind the choice in JPA 2.0, or they may find your comments enlightening.
Thanks
Rahul
Posted by 192.18.43.225 on July 21, 2008 at 04:16 PM PDT #
Repeatable read is already provided by default by EclipseLink and Hibernate, because after an object is read it will not be refreshed from database for the duration of the transaction. However, same query can find new object during the life of the transaction, so phantom read.
It's a useful article but I think it's not using correctly the term repeatable read.
Title could have been: How to implement optimistic locking in JPA EclipseLink.
Posted by Sebastien Tardif on March 21, 2009 at 01:42 PM PDT #
Hi Sebastien,
Thanks for your comment. The tech-tip explains how to prevent non-repeatable reads and the example considers a scenario where the data has changed in the database since it was read.
While in JPA implementations you will by default have repeatable reads because an EntityManager find does not go to the database, you could be working with stale data because it may have changed on the database by another transaction. Unless you do an EntityManager lock (which is demonstrated in the example) or the transaction writes the same entity to the database, your application may not realize that it was working with stale data.
Optimistic or Pessimistic Locking are the mechanisms for ensuring that we can prevent the non-repeatable reads.
Rahul
Posted by Rahul on March 22, 2009 at 06:39 PM PDT #
|
http://blogs.sun.com/enterprisetechtips/entry/preventing_non_repeatable_reads_in
|
crawl-002
|
refinedweb
| 2,323
| 54.42
|
I was wondering if anyone had any experience with cross-database layers (e.g. MDB2) which had support for schemas. By "Schemas" I don't mean "the structure of a table", but rather the term as used in PostgreSQL, where within one database you can have alice.messages and bob.messages as two seperate tables both individually named "messages" but in "alice" and "bob" schemas.
I'm trying to see if there's a good abstraction layer I can use which somewhat provides this kind of namespace functionality, at least for Postgresql. (AFAIK MySQL doesn't have that kind of schema, not sure about SQLite.)
Last edited by Terr; May 29th, 2007 at 08:08 PM.
[HvC]Terr: L33T Technical Proficiency
Forum Rules
|
http://www.antionline.com/showthread.php?275391-PHP-and-database-schemas&p=925020
|
CC-MAIN-2017-34
|
refinedweb
| 123
| 75.61
|
VB 6.0 Atom client project
<p>Had a crack at a Visual Basic 6.0 <a href="">atom</a>-client yesterday.</p>
<p>
First thing was a medium to display the feed content.<br />
A multi-line textbox would be like the toilet on train, doesn't look much but works.<br />
Because XML is easily transformed into HTML, it would be a good idea to do that:<br />
<a href="">SHDocVwCtl </a> is (almost) everywhere a VB program might go, so it was a strong contender.<br />
<a href="">Gecko</a> is cool, fast, The Right Thing (tm) and so on, so I finally went with <a href="">Mozilla ActiveX Control</a>
</p><p>
I generally like to do as few external dependencies as possible,
so went on to experiment with navigating the HTML control to the feed,
and traversing that.
That sort of bogged me down.
I gave serious thought to loading the XML with an internet API call and somehow parsing the feed my self...
and then cowardly took the easy way out with <a href="">MSXML</a>. Of course, there is a <a href="">perverse thing</a> with MSXML 4, XPath and default namespaces which took about a year off my life expectancy...
</p>
<p>
Storage of the feeds was next.<br />
Since I use SQL databases as backends almost every day at work, and I've sometimes used Access mdb's for similar projects, that was sort of a given, <a href="">SQLite</a> it would be.<br />
Only I got sort of sidetracked during install and searching for info on a VB wrapper.
So, since it was three o'clock in the morning and I was supposed to stand in for my boss at work today, I gave in to the <a href="">Sandman</a>.</p>
<p>
I don't know why, but XMLHTTP didn't come to mind during the whole episode.
Maybe they haven't pushed <a href="">AJAX</a> enough on me or something. Weird.
</p>
<p>
Bottom line: I finished nothing that I would show even to my mother, but it isn't a hopeless task. (besides, I got all of Saturday and Sunday to play with it)
</p>
<p>
-- <br />
diwen</p>
VB 6.0 Atom client project
<p>Diwen, I used to have those problems until I changed my programming tool to one that is fully integrated. I hated to have all those external VB components all over the place, remembering to change this and that from here, this from over there, etc.., and seeing my program?s performance decrease and never being able to know when or why it happened.... Now I am really happy with my new development platform, I suggest you change to another IDE.</p>
VB 6.0 Atom client project
Alores, I'm right with You on changing platform.<br /><br />I couldn't deduce which platform You're referring to, <br />but I venture it's either RealBasic or Visual Basic.NET ?<br />If it isn't, or if You're talking about an actual <br />alternative IDE for VB 6.0, please do tell, <br />as I'm always interested in better ways of doing (stupid) things.<br /><br />Actually, I'm with the right-tool-for-the-right-job <br />kind of thinking. But that's true only while I'm being paid.<br />For leisure, I actually enjoy banging my head against a wall.<br /><br />The reason I'm hanging on with VB 6.0 is that <br />I like the challenge of pushing the envelope.<br />(The envelope in this case being my own <br />understanding and knowhow)<br /><br />I've been "sort of programming" with several <br />dialects of BASIC for many years, <br />during which I haven't actually progressed at all, <br />but have indeed developed a perverse <br />liking for doing silly things in a silly way.<br />I like VB6 because it still has <br />(at least for me) silly things to offer.<br /><br />...And as far as hobbies go, I only have these: <br />playing pool, drinking beer, and writing bad VB6 code.<br /><br />Bottom line:<br />I'm (at the moment) a professional VB6 programmer, <br />(that's step 1 isn't it?)<br />and for kicks I tend to do things the hard way.<br /><br />-- <br />diwen<br /><br />(btw, I am not a native speaker of english, <br />
so some puns not intended)
diwen's blog
This conversation is currently closed to new comments.
|
https://www.techrepublic.com/forums/discussions/diwens-blog/
|
CC-MAIN-2019-26
|
refinedweb
| 761
| 70.73
|
Most of the problems we have examined have had relatively simple solutions, because the data objects in the problem could be represented using the predefined Java types. For example, we can represent a volume with a double, a name with a String, a vowel with a char, and so on.
The problem is that real-world problems often involve data objects that cannot be directly represented using the predefined Java types. For example, suppose that we know a certain gourmet chef named Pierre whose recipes are written to make 12 servings. The difficulty is
We have provided two programs for today's exercise: Pierre1.java and Pierre2.java. In the first part of the exercise, we will do what is needed to get Pierre1.java operational, and in the second part of the exercise, we will extend the first part so that Pierre2.java is operational.
Begin by making a new project folder in which to store your work for this exercise. Then save copies of the files Pierre1.java, Pierre2.java, Fraction.java, in your project folder.
If you are using an IDE, create a new project named Pierre in your project folder, and add the files Pierre1.java and Fraction.java and the ann package to your project (but not Pierre2.java).
Open Pierre1.java and Pierre2.java and take a moment to compare the two programs. Each program implements the same basic algorithm:
1. Get oldMeasure, the fractional measure to be converted. 2. Get scaleFactor, the fractional conversion factor. 3. Compute newMeasure = oldMeasure * scaleFactor. 4. Output newMeasure.
Note that a solution to Pierre's problem is quite simple, given the ability to define, input, multiply and output fraction objects. The two programs differ only in how they output Fraction objects. Note also that some lines are "commented out" at present. We will be "uncommenting" these lines as we develop the functionality needed in order for such lines to work properly.
The difficulty is that there is no predefined Java type Fraction by which such objects can be defined or operated upon. In such situations, Java provides a mechanism by which a programmer can create a new type and its operations. This mechanism is called the class, which is the subject of today's exercise.
In Java, a new type can be created by
public class TypeName extends OtherType { }
where TypeName is a name describing the new type.
When objects and methods are declared inside the class, the kind of access that will be allowed must be indicated. A class has two basic kinds of access, public and private. The public parts are accessible by all other classes and are typically how methods are declared, and the private parts are accessible only by the class and are typically how class attributes are declared.
To illustrate, suppose that we want to define a new type whose objects can be used to store Cartesian coordinates. Such an object has two attributes: an X-value, and a Y-value, both reals. We can define data objects for these attributes as follows:
private double myX, myY;
(To distinguish attribute identifiers from other identifiers, we will place the word my at the beginning of an attribute's name.) We then surround them with an appropriately named class structure:
public class Coordinate extends Object { private double myX, myY; }
The result is a new type, named Coordinate, which can be used to declare objects, such as a point:
Coordinate point = new Coordinate();
The object to which point refers then has two real components, one named myX and the other named myY.
In general, the data portion of a class definition can be thought of as following this pattern:
public class TypeName extends OtherType { private Type1 AttributeList1 ; private Type2 AttributeList2 ; ... private TypeN AttributeListN ; }
where each Typei is any defined type; and each AttributeListi is a list of the attributes requiring an object of type Typei needed by our new object kind (TypeName).
Now, if we apply this approach to the problem we are trying to solve, we see that we need to identify the attributes of a Fraction object. If we examine a few fractions
1/2 4/3 4/16 16/4
we can see that each fraction has the form:
number1/number2
where number1 is called the numerator and number2 is called the denominator. The numerator and denominator are different from one fraction to another, and so these quantities must be recorded for any given fraction value; however the / symbol is common to all fractions, and so it need not be recorded as an attribute. A fraction thus has two attributes, its numerator and its denominator, both of which are integers.
Begin editing the file Fraction.java, and define two integer data objects named myNumerator and myDenominator to represent these two attributes. (Yes, you should prepend my to the beginning of each name, for reasons that will be apparent shortly). Be sure to arrange your class so that myNumerator and myDenominator are within the class and are private.
Given this declaration of Fraction, object declarations like this
Fraction oldMeasure; ... Fraction scaleFactor;
can be thought of as defining two data objects with the following forms:
The objects within a class object that store its state are usually called the attributes of that object.
Note that each object of type Fraction has its own copy of each of the attributes we defined. This is why we preface the names of these attributes with my, to indicate that from the perspective of the object, attributes are characteristics of that object. That is, within the Fraction object named oldMeasure, myNumerator refers to its first data object and myDenominator refers to its second data object. However, within the object named scaleFactor, myNumerator refers to its first data object and myDenominator refers to its second data object. Each object of type Fraction thus has its own separate state.
Since a class may contain an arbitrary number of attributes, a software model can be constructed for virtually any real-world object, simply by defining data objects for each of its attributes, and then surrounding those data objects with an appropriately named class structure.
In the source program Pierre1.java, the definition of oldMeasure is currently commented out. Modify your source program so that this declaration is no longer commented out (but the subsequent lines are). Then compile your source program, to test the syntax of what you have written. When it is correct, continue to the next part of the exercise.
Besides having attributes, a class can also have methods which provide a means by which the operations on a class object can be encoded. These methods are different from the static methods we have seen before in that they will work on the attributes of one particular object.
One of the characteristics of a class is that its attributes are kept private, meaning that a program using the class is not permitted to directly access them. (This is a good idea because a program that directly accesses the attributes of a class becomes dependent on those particular attributes. If those attributes are changed (which is not uncommon in class maintenance), then such programs must also be changed, increasing the cost of software maintenance.)
While it is a good idea for the attributes of a class to be kept private, we want users of the class to be able to perform operations on class objects. As a result, the methods should be declared as public, in contrast to the attributes.
This is the reason for the keywords public and private within the class. A common pattern for defining a class is
public class TypeName extends OtherType { private Type1 AttributeList1 ; private Type2 AttributeList2 ; ... private TypeN AttributeListN ; public TypeName1 MethodDef1 public TypeName2 MethodDef2 ... public TypeNameM MethodDefM }
All of the declarations that follow the keyword public can be accessed by programs using the class; and all of the declarations that follow the keyword private cannot be accessed by programs using the class.
By convention, all of the attributes of an object are grouped together.
We have seen that these methods are called differently from the static methods we have used earlier: if two String objects named greeting and closing are defined as follows:
String greeting = "hello", closing = "goodbye";
then the expression
greeting.size()
returns the value 5, while the expression
closing.size()
returns the value 7.
Object-oriented programmers like to think of a call to a method as a message to which the object responds. To illustrate, when we send the size() message to greeting, greeting responds with the number of characters it contains (5); and if we send the same size() message to closing, closing responds with the number of characters it contains (7). The effect of this approach is to shift the point of view from the method to the object.
In contrast, the static methods we used before perform are sent to the class itself not to an object of that class. Just as we can have static methods, we can have static attributes. Static attributes have just one copy and are not stored in each object of that class. Those attributes are effectively shared by all objects of the class are sometimes called class wide. While it is easy to access a static method or attribute from an objects regular methods, the reverse is not true.
Put differently, regular methods are usually written from the perspective of the class object and must have an object to work with. A static method performs its duties independently of the objects of a class and can work even if no object of that class exists.
In this first part of today's exercise, we will focus on adding the methods to class Fraction needed to get Pierre1.java operational.
To facilitate debugging a class, it is often a good idea to begin with a method that can be used to display class objects -- an output method.
From the perspective of our Coordinate class, we can specify the task of such a method as follows:
Receive: out, a Screen to which I am to write my values. Output: myX and myY, as a pair.
To define print(), we must put the code into the class definition as follows:
public void print(Screen out) { out.print( "(" + myX + "," + myY + ")" ); }
As explained earlier
Note also that the prefix my helps to reinforce the notion that this is a message to which a Coordinate object responds. That is, if point is a Coordinate object whose X-value is 3 and whose Y-value is 4, then the statement
point.print(theScreen);
displays via theScreen:
(3,4)
Similarly, if origin is a Coordinate object whose X-value is 0 and whose Y-value is 0, then the statement
origin.print(theScreen);
displays via theScreen:
(0,0)
Note that as a message to an object, a method must be invoked using dot notation, which specifies the object to which the message is being sent.
Note finally that as a message to which an object responds, a method can directly access the private attributes of the object.
Using this information as a pattern, define a similar print() method for your Fraction class, such that if oldMeasure is a Fraction whose numerator is 3 and whose denominator is 4, then a message:
oldMeasure.print(theScreen);
will display
3/4
Check the syntax of your method, and continue when it is correct.
An output operation for a class is of little use unless we are able to define and initialize objects of that class. The action of defining and initializing an object is called constructing that object. To allow the designer of a class to control the construction of class objects, Java allows us to define a method called a class constructor that specifies exactly what actions are to be taken when a class object is created. When a class object is defined, the Java compiler calls this method to initialize the object's attributes.
For example, suppose we would like for a definition of a Coordinate object:
Coordinate point = new Coordinate();
to initialize the attributes of the object to which point refers to zeros. We might specify this task as follows:
Postcondition: myX == 0.0 && myY == 0.0.
Note that a constructor does not return anything to its caller, it simply initializes the attributes of an object when that object is defined. We specify this behavior through a boolean expression that is true when the method terminates. Such an expression is called a postcondition, since it is a condition that holds when execution reaches the end of the method.
The name of a constructor is always the name of the class, so we define this method as public in the class Coordinate, as follows:
public Coordinate() { myX = 0.0; myY = 0.0; }
Given this method, when a Coordinate object is defined, the Java compiler will automatically call this code to initialize this object, which sets the object's myX and myY attributes to zero values.
Note that a constructor has no return type (not even void). As was mentioned earlier, this is because a constructor never returns anything to its caller -- it merely initializes objects of its class.
The pattern for a constructor is thus:
public ClassName(ParameterList) { StatementList }
where the ClassName refers to the name of the class and StatementList is a sequence of statements that initialize the attributes of the class. Constructors can take parameters, which are defined as they would be for any other method, and any valid Java statement can appear in the body.
Using this information, define a constructor for your Fraction class, that satisfies the following specification:
Postcondition: myNumerator == 0 && myDenominator == 1.
That is, the definition:
Fraction oldMeasure = new Fraction();
should initialize the attributes of oldMeasure appropriately to represent the fraction 0/1.
Constructors are only invoked by new. For example, consider the following seemingly legal code:
Fraction oldMeasure; oldMeasure.print(theScreen);
The first line does not create an object of type Fraction. The variable oldMeasure has been declared and given the special value null. If we send a message to a null object, an error will result. The java compiler will recognize this and generate a compilation error, to the effect that oldMeasure has not been initialized. This is a very common kind of error that occurs in Java. You are encouraged to try this code and see the error message that results so you will recognize it when you see it again.
Test the syntax of what you have written, and continue when it is correct.
A class can have multiple constructors, so long as each definition is distinct in either the number or the type of its parameters. Defining the same method multiple times is called overloading the method.
To illustrate, suppose that we would like to be able to explicitly initialize the X-value and Y-value of a Coordinate object to two values that are specified when the object is defined. We might specify this task as follows:
Receive: x and y, two double values. Postcondition: myX == x && myY == y.
We can perform this task by overloading the Coordinate constructor with another definition that takes two double arguments and uses them to initialize our attributes:
public Coordinate(double x, double y) { myX = x; myY = y; }
Given such a method, the Java compiler will process a Coordinate declaration statement like this:
Coordinate point1 = new Coordinate(), point2 = new Coordinate(1.2, 3.4);
by using our first constructor to initialize point1 (since it has no arguments), and use our second constructor to initialize point2 (since it has two arguments), resulting in objects that we might visualize as follows:
Using this information, define and prototype a second Fraction constructor that satisfies this specification:
Receive: numerator and denominator, two integers. Precondition: denominator != 0. Postcondition: myNumerator == numerator && myDenominiator == denominator.
That is, the definitions:
Fraction oldMeasure = new Fraction(); ... Fraction scaleFactor = new Fraction(1, 6);
should initialize oldMeasure to 0/1, and initialize scaleFactor to 1/6. Be sure that your explicit-value constructor checks its precondition (e.g., use the hoj.Assert class).
Use the Java compiler to test the syntax of what you have written. When the syntax is correct, use Pierre1.java to test what you have done, by inserting calls to print() to display their values:
... oldMeasure.print(theScreen); ... scaleFactor.print(theScreen);
When your methods are working correctly, remove this "test code" from Pierre1.java.
Any operations we wish to perform on a class object can be defined as public methods within the class. Accessors are methods that retrieve an attribute of the object. These are often simple one line methods. For example, it might be useful to be able to extract the x and y values of a Coordinate object. The first task can be specified as:
Return: my X-value.
while the second is
Return: my Y-value.
We would then define these simple methods in Coordinate, as follows:
public double getX() { return myX; } public double getY() { return myY; }
Given such methods, if two Coordinate objects point1 and point2 are as follows:
then the expression:
point1.getX()
evaluates to 0.1, while the expression
point2.getY()
evaluates to 8.9.
We should mention that it is Java's convention to give accessor methods the name getA() where A is the particular attribute we are retrieving. Thus, if we have used the name myA for our attribute, getA() is the name of the corresponding method to extracts the value of that attribute. For this reason, accessors are sometimes called getters by Java programmers. (Correspondingly, mutators methods that change an attribute A's value are named setA(), and are thus called setters.)
Using this information, add to class Fraction an accessor method getNumerator() that satisfies this specification:
Return: myNumerator.
and an accessor method getDenominator() that satisfies this specification:
Return: myDenominator.
Then test their syntax and continue when they are correct.
Once we are able to define Fraction objects, it is useful to be able to input a Fraction value. To illustrate, suppose that we wanted to input a Coordinate value:
(3,4)
We can specify the problem as follows:
Receive: in, a Keyboard. Precondition: in contains a Coordinate of the form (x,y). Input: (x,y), from in. Postcondition: myX == x && myY == y.
To solve the problem, we can define read() as a method that satisfies the specification, as follows:
public void read(Keyboard in) { String point = in.readWord(); StringTokenizer parser = new StringTokenizer(point, "(,)", false); if(parser.countTokens() != 2) throw new RuntimeException("bad format for coordinate"); double x = Double.parseDouble(parser.nextToken()); double y = Double.parseDouble(parser.nextToken()); myX = x; myY = y; }
Since we are reading the values entered by a (fallible) person, there are a number of things that could go wrong here:
To read what the user enters, we use the readWord() (or readLine()) methods from ann.easyio.Keyboard. To break the resulting String into its constituent parts, we can use Java's predefined StringTokenizer class::
StringTokenizer parser = new StringTokenizer(point, "(,)", false);This statement breaks the String named point into pieces called substrings, using the open-parenthesis, comma, and close-parenthesis symbols to separate the pieces. To illustrate, if the user enters (1.0,3.5), then our StringTokenizer will break the String "(1.0,3.5)" into "1.0" and "3.5". We can then retrieve these two parts by sending our StringTokenizer the nextToken() message twice, as shown above. (see GUI Interlude #2 for more information about it) that is stored in the package java.util, so to use it, you should add this line before the beginning of your Fraction class in Fraction.java:
import java.util.StringTokenizer;Once we have the string broken into its pieces, we need to convert those pieces to double values. We can do so using the parseDouble() method from the Double class, as shown above. If the user enters a non-numeric value (e.g., (one,three)), the parseDouble() method throws an exception. To avoid "corrupting" our instance variables myX and myY, we store the values returned by parseDouble() in local variables x and y, and when both parseDouble() calls have succeeded, we update our instance variables.
Given such a definition for read(), the following code:
Coordinate point = new Coordinate(); point.read(theKeyboard);
would read a Coordinate of the form (x,y) from theKeyboard.
Using this information, define an input method named read() for class Fraction. Your method should satisfy this specification:
Receive: in, an Keyboard. Precondition: in contains a Fraction value of the form n/d, such that d != 0. Input: n/d, from in. Postcondition: myNumerator == n && myDenominator == d.Note that since a Fraction stores int values (as opposed to the double values our Coordinate class stores), your method will need to use Integer.parseInt() method instead of Double.parseDouble().
When finished, you should be able to "uncomment" the statements:
oldMeasure.read(theKeyboard); ... scaleFactor.read(theKeyboard);
and read a Fraction value from theKeyboard into oldMeasure and scaleFactor. Put differently, we should be able to send a Fraction object the read() message, with the Keyboard from which it should read as an argument.
Test your input method by adding statements like those above to Pierre1.java, along with a print() that echos the input values back to the screen. Compile and run the program, and continue when read() works correctly.
We have seen that methods like constructors can be overloaded. Unfortunately, one of the design decisions of Java was not to allow operators, such as the arithmetic operators (+, -, *, /, and %) to be overloaded. We will need to define methods for those operations.
As an illustration, suppose that we want to permit two Coordinate objects to be added together. In the object-oriented world, an expression like
point1.add(point2)
is thought of as sending the add message to point1, with point2 as a message argument. That is, we can specify the problem from the perspective of the Coordinate receiving this message as follows:
Receive: point2, a Coordinate. Return: result, a Coordinate. Postcondition: result.myX == myX + point2.myX && result.myY == myY + point2.myY.
One way to define this method is as follows:
public Coordinate add(Coordinate point2) { Coordinate result = new Coordinate(myX + point2.getX(), myY + point2.getY()); return result; }
This definition uses our second constructor to construct and initialize result with the appropriate values.
While it is useful to define all of the arithmetic methods for a Fraction, the particular operation that we need in order to solve our problem is multiplication (the others, we leave for the exercises). From the preceding discussion, it should be evident that we need to define times so that the expression in Pierre1.java:
oldMeasure.times(scaleFactor)
can be used to multiply the two Fraction objects oldMeasure and scaleFactor. We can get some insight into the problem by working some simple examples:
1/2 * 2/3 = 2/6 = 1/3 3/4 * 2/3 = 6/12 = 1/2
The specification for such an operation can be written as follows:
Receive: rightOperand, a Fraction operand. Return: result, a Fraction, containing the product of the receiver of this message and rightOperand, simplified, if necessary.
From these examples, it should be apparent that we can construct result by taking the product of the corresponding attributes and then simplifying the resulting Fraction.
For the moment, let's ignore the problem of simplifying an improper Fraction. Extend your Fraction class with a definition of times that can be used to multiply two Fraction objects. Then test the correctness of what you have written by "uncommenting" the lines in Pierre1.java that that compute and output newMeasure. Continue when your multiplication operation yields correct, (if unsimplified) results.
Fraction Simplification. The main deficiency of our implementation of times is its inability to simplify improper fractions. That is, our multiplication operation would be improved if class Fraction had a simplify operation, such that fractions like:
2/6 6/12 12/4
could be simplified to:
1/3 1/2 3/1
respectively. Such an operation is useful to keep fractional results as simple and easy to read as possible. To provide this capability, we will implement a Fraction method named simplify(), such that a method like times can call it as shown in the following code:
public Fraction times(Fraction right) { // compute result... result.simplify(); return result; }
There are a number of ways to simplify a fraction. One straightforward way is the following algorithm:
0. Find gcd, the greatest common divisor of myNumerator and myDenominator. 1. If gcd = 0, then terminate 2. Replace myNumerator by myNumerator/gcd. 3. Replace myDenominator by myDenominator/gcd.
Notice that after this method has finished, the state of the object may have changed ( myNumerator and myDenominator may be different.) A method that changes the state of the object is called a mutator.
The implementation file Fraction.java contains a method greatestCommonDivisor() that implements Euclid's algorithm for finding the greatest common divisor of two integers. Notice that it is a static method inside our class, so we can just call it directly (i.e., just do greatestCommonDivisor()). Using method greatestCommonDivisor() and the preceding algorithm, define method simplify() as a method of class Fraction.
We have now provided all of the operations needed by Pierre1.java, so the complete program should be operable and can be used to test the operations of our class.
In this second part of today's exercise, we add the functionality to class Fraction in order for Pierre2.java to work properly.
If you are using an IDE/project, remove Pierre1.java from it. Then add Pierre2.java.
Open Pierre2.java for editing.
While we have provided the capability to output a Fraction value via a print() method, doing so requires that we write clumsy code like:
theScreen.print("\nThe converted measurement is: "); newMeasure.print(theScreen); theScreen.println();
instead of more elegant code like:
theScreen.println("\nThe converted measurement is: " + newMeasure);
Put differently, our print() method solves the problem, but it doesn't fit in particularly well with the rest of Java's output operations. It would be preferable if we could use the print and println methods to display a Fraction value.
To do so we need to understand a little bit about println. The statement
Coordinate someWhere = new Coordinate(4,3); theScreen.println(someWhere);
is equivalent to
Coordinate someWhere = new Coordinate(4,3); String aString = someWhere.toString(); theScreen.println(aString);
In other words, Java assumes that your object understands the toString() message and will return an appropriately defined String. (Via the mechanism of inheritance, which we will discuss later, all classes understand the toString() message, but what is returned is not usually appropriate for friendly output.)
To do this, we would define the following method inside the class Coordinate.
public String toString() { return "(" + myX + "," + myY + ")"; }
Consider that with this new method we now have two methods that do almost the same job.
public String toString() { return "(" + myX + "," + myY + ")"; } public void print(Screen out) { out.print( "(" + myX + "," + myY + ")" ); }
While this code is not incorrect, it can present a maintenance problem. Suppose that some time in the future we wish to change the format for printing a coordinate. For example, we might want to use square brackets instead of parentheses. To remain consistent, we must change both methods. For a small class like Coordinate this might not seem like a big problem, but as classes expand it is easy to miss making a change. To fix this, we should redefine one of these methods in terms of the other.
This is a general software design technique that helps improve ones ability to write and maintain code. If we have two chunks of code that do the same job, factor that code into a separate method. While it was not obvious now, that is the reason we wrote a simplify() method. Other methods besides times() will want to provide results in simplified form. By doing it just once, we are less likely to make a mistake. If we have made a mistake, we only need to fix the code in one easily located place.
The natural way to factor out the common expression in our previous example is:
public String toString() { return "(" + myX + "," + myY + ")"; } public void print(Screen out) { out.print(this.toString()); }
In the print method this refers to the object that the print message was sent to. So the toString() message is sent to that same object and then the string that is returned will be printed. Now there is only one piece of code that would need to change if we wanted to change the format.
Using this information, add a toString() method to the class Fraction, so that if the value of newMeasure is 1/2 , then
theScreen.println("\nThe converted measurement is: " + newMeasure);
will cause
The converted measurement is: 1/2
to be displayed on the screen. Test what you have written and continue when it is correct. Remember to use good design and factor out the common code.
Now that you have seen how to build a class, we need to expand our design methodology to incorporate classes:
1. Describe the behavior of the program. 2. Identify the objects in the problem: If an object cannot be directly represented using available types: Design and implement a class to represent such objects. 3. Identify the operations needed to solve the problem: If an operation is not predefined: 1) Design and implement a method to perform that operation. 2) If the operation is a message that will be sent to an object: Design and implement the method as non-static Otherwise (the operation is a message that will be sent to a class): Design and implement the method as static 4. Organize the objects and operations into an algorithm.
Using this methodology and the Java class mechanism, we can now create a software model of any object! The class thus provides the foundation for object-oriented programming, and mastering the use of classes is essential for anyone wishing to program in the object-oriented world.
Learning to design and implement classes is an acquired skill, so feel free to practice by creating software models of objects you see in the world around you!
Class, Attribute, Method, Precondition, Postcondition, Constructor, Accessor, Mutator, Overloading, Factoring code, toString().
Hard copies of your final version of Fraction.java, Pierre1.java, Pierre2.java, and an execution record showing the execution of each program.
Back to This Lab's Table of Contents
Back to the Prelab Questions
Forward to the Homework Projects
Back to the Table of Contents
Back to the Introduction
|
https://cs.calvin.edu/activities/books/java/intro/1e/HandsOnJava/labs/6/exercise/index.html
|
CC-MAIN-2018-09
|
refinedweb
| 5,083
| 54.52
|
Sc:
def ~=(x: Double, y: Double, precision: Double) = { if ((x - y).abs < precision) true else false }
You can use this method like this:
scala> val a = 0.3 a: Double = 0.3 scala> val b = 0.1 + 0.2 b: Double = 0.30000000000000004 scala> ~=(a, b, 0.0001) res0: Boolean = true scala> ~=(b, a, 0.0001) res1: Boolean = true
Discussion
When you begin working with floating-point numbers in programming, you quickly learn that
0.1 plus
0.1 is
0.2:
scala> 0.1 + 0.1 res2: Double = 0.2
But 0.1 plus 0.2 isn’t exactly 0.3:
scala> 0.1 + 0.2 res3: Double = 0.30000000000000004
This subtle inaccuracy makes comparing two floating-point numbers a real problem:
scala> val a = 0.3 a: Double = 0.3 scala> val b = 0.1 + 0.2 b: Double = 0.30000000000000004 scala> a == b res4: Boolean = false
As a result, you end up writing your own functions to compare floating-point numbers with a precision (or tolerance).
As you saw in Recipe 1.11 of the Scala Cookbook, you can define an implicit conversion to add a method like this to the
Double class. This makes the following code very readable:
if (a ~= b) ...
Or, you can add the same method to a utilities object, if you prefer:
object MathUtils { def ~=(x: Double, y: Double, precision: Double) = { if ((x - y).abs < precision) true else false } }
You can then invoke that method like a static method in Java:
println(MathUtils.~=(a, b, 0.000001))
With an implicit conversion, the name
~= is very readable, but in a utilities object like this that name doesn’t look right, so it might be better named
approximatelyEqual,
equalWithinTolerance, or some other name.
|
https://alvinalexander.com/scala/how-to-compare-floating-point-numbers-in-scala-float-double/
|
CC-MAIN-2022-05
|
refinedweb
| 289
| 79.46
|
W3C * TAG
agenda * minutes * photo
(Aim: Review Status and advance discussion).
(Aim: reach concencus/finding on 2 or 3 *big* ones)
Issues (in no particular order):
All TAG members were present.
See also: photo.
SW: We probably need to get the writing assignments integrated then we can take on more responsibility again
TB: We either need to get more of Ian's time or find another resource. And we need to decide this soon.
PC: Negotiate with the AB to push the current process document out so that we can get more of Ian's time
TB: What are our timing constraints?
PC: We should try to get something out by the beginning of the summer
DO: Issue 7 has really taken a lot of time because it
DO: is important. We need to finish that up.
TBL: hopefully after we get the arch. document done, we'll be able to do findings when we find points of contention
ACTION: TBL to discuss the negotiation of Ian's time with the AB
For telcon agendas, we'll focus on one or two issues (b/i on the agenda)
Some discussion of moving the telcon time; delayed until Chris arrives
PC: It would help if the agenda or the person arranging the meeting could give explicit pointers to new threads that are going to be discussed
Proposal: move the telcon to Monday, 15:00 eastern
RESOLVED.
We'll stick with simple prioritisation to maintain common focus (Issues list/b/i from the agenda)
TBL: what happens if the TAG reaches a finding but isn't really done because we need to coordinate with some other group
TBL: somewhere we should keep a list of contentious laison issues
PC: we can't assume that the quarterly IETF/W3C meetings are the target points for getting work done
TB: what we really should do is probably what CL and I did: join the mailing list and contribute a few messages
RF: we can't learn new information from the IESG because they're not looking at brand new issues
PC: so we should always clearly identify who the other parties are when we have a liason issue
TB: the take-away is that by default we should try to deal with liason issues at a grass-roots, peer-to-peer level. And if that doesn't work, it's not clear what to do.
TBL: but we should catalog it so that we can answer the question when it's asked
SW: have we had any feedback from the AC/AB about our reports? And CL pointed to an article about the TAG
PC: we should wave the flag more when the architecture document is ready for review
CL: we should do a press release when we publish the first architecture document
General consensus is that the signal to noise ratio has gotten better
[[[Break]]]
TBray takes over scribing at 11 AM.
Move to issues, push Architecture-doc issues to later.
DO: I posted yesterday a draft of a proposal for a GET binding for some HTTP POST binding.
Proposal: TAG should do the work, not current XMLP WG.
TBray: Reasons are political; XMLP is tired and wants to get their doc out; WS Arch group could take it on but that would stretch way out into the future. So I agree TAG should do it.
TBL: Is there a risk of infringing on other WG's jurisdiction and causing bad feelings?
DO: In general yes, but in this case the relevant WG is feeling closed to finished and is tired and would probably welcome TAG intervention.
DC: Does this mean we agree that GET should be used?
TB: Even if you only thinik a small proportion of WS is amenable to get, the cost is low so why not do it?
RF: Danger of putting the wrong information while encoding method calls in URI. So maybe the service should have the responsibility of publishing a GET-able URI for the info it makes available. If it's important content, then it has an identifier.
CL: Process issue; danger of interference. Scenarios where we intervene arbitrarily is different from that where there's a desire for help from the WG. Danger of groups in trouble appealing to the TAG for rescue.
PC: Current binding HTTP SOAP binding is POST only. Are you proposing an alternative mapping?
DO: An additional mapping.
PC: So if this were being done by XMLP, it would be in part 2 of SOAP, bindings? (Yes) Why is it important not to disturb the current SOAP process? Is the real reason that they're tired or they just want to get 1.2 done and more to other things? So is this just an alternative? In which case we could do a NOTE. Or is this something that really should be normatively in SOAP? If so, it might be a real mistake for us to do it.
TBL: Current SOAP spec says there are several bindings, of which they provide one, via POST.
PC: Is this effectively a third binding?
SW: What's in the space now: We've established a framework for bindings, describing message exchange patterns, bindings say they support particular patterns. This could be a new binding with new patterns.
DO: Thinking more that this would support existing request/response pattern. And in fact it must be mechanically transformable.
TBray: Not only in principle transformable, but in practice not difficult.
DC: SOAP 1.2 should say that use of GET is desirable, and perhaps have a pointer to a NOTE by Orchard et al. Also I redrafted my finding.
DO: I talked to David Fallside, who seemed to welcome our work in this area.
[TAG walked through most recent Connolly draft, general approval] $Revision: 1.19 $ of $Date: 2002/07/16 16:12:48 $
DO: Note that XForms will also have to URLencode its stuff; this should be consistent with SOAP URLencoding.
CL: Horrible problem with character encoding in URL-encoded or message body, XML message body makes it go away, but loses advantages of GET.
RF: Partial solution is a hidden form field saying which charset; but that doesn't solve 100% of problem.
TBray: General solution as in IRI and XML 1.0 spec; always use UTF-8 along with %-encoding.
RF: Problem is, browsers don't do that.
CL: Problem with big URIs; take a UTF-16 Chinese text, put it in UTF-8 and %-encode and you're going to hit 4K chars in your URI pretty darn quick. Doesn't want %-encoding to become the normal way to ship XML around the Net.
PC: People jave tools to build WS; default should not be POST.
DC: Yes, it has to be POST, because the web service might not be SAFE.
CL: ACTION to add a concern re non-western characters in this scenario.
RESOLVED by consensus to add another bullet to the top of the doc saying GET should be used for safe/idemportent operations.
DC ACTION to write up limitations in draft in re SOAP.
ACTION TBray & DC & CL to polish up DO's 0.1-level draft & find out what's going on with XForms.
ACTION DC add pointer to PROPfind to limitiations/problems section.
DC/TBL: Should we think about adding a QUERY method, or send a body with GET?
RF: HTTP already allows that... but you lose the advantages of a small URI. DC action item to summarize history, given that the QUERY idea has come up more than once previously; outline all 4 corners of the design space.
ACTION DC to republish. RESOLVED: default is that it gets adopted as a finding.
DC action item to point out that WSDL should take this into account & also SOAP primer.
Discussion on process: TBL & PC point out that this has an obvious place in the work of and the spec produced by XMLP, and it seems wrong to take this out of their hands.
TBray: I want this out in the 1.2 timeframe, so let's do it in parallel.
DO: Options: - force it into SOAP 1.2 - let it into SOAP 1.3 - do it parallel e.g. in the TAG His preference is not to force it into 1.2.
PC is not comfortable with the TAG trying to take this on in parallel with the endgame on SOAP 1.2.
DO, PC, TBray: Strong desire not to slow down SOAP 1.2.
Straw poll on: Make it a requirement for SOAP 1.2, or allow solution at a later date:
CL: Later OK
DO: Later OK
SW: Either way
PC: Later OK
TBL: Abstain
DC: preference for Now, but both acceptable
NW: Worried about consequences of letting through, but don't want to hold it up.
TB: Like Norm
RF: Later OK if we can get a NOTE in front and institute work item immediately. Brief discussion of DO's draft.
RF: Rather than construct URI equiv to SOAP structure, why not put that in the WSDL or other higher-level spec.
TBray: because arch is layered and SOAP has to stand on its own.
DO: Explanation of difference between RPC and doc-literal style of output.
TBL: TAG's desires are clear: we want our last call issue to result in this capability going into 1.2 at the current time. On the other hand, this is tough managerially and people like David Fallside need to brought into the loop. We get things like this all the time, where there's pressure to get a spec out but also pressure to cover an important missing point.
PROPOSAL: The TAG finds that the absence of a GET binding in SOAP is incompatible with the architecture of the Web, in that it contravenes the architectural principle that important information resources should have URIs.
break for lunch.
Minutes of TAG F2F meeting taken after lunch and beach walk by Paul Cotton. The proposal on the table just before lunch was the following:
Proposal: The TAG finds that the absence of a GET binding in SOAP is incompatible with the architecture of the Web, in that it contravenes the architectural principle that important information resources should have URIs.
TB: Even if get concensus on this proposal we may want to discuss our intent with the XML Protocol WG members and chair.
CL: How much lag do you want?
TBL: Any objection?
PC: Paul abstains unless we make clear what we want to say offline.
TB: What about adding a para that states: "The TAG appreciates the urgency in completing the SOAP 1.2 specifications and wants to work with those concerned to address this problem with the least disruption possible." There was no objection to the proposal as amended..
Next issue from agenda:
"It seems to be that the RDFCore and XML Schema WGs (at the very least) ought to develop a common, reasonably acceptable convention as to the mapping betweeen QNames and URIs. Perhaps this is an issue that the TAG ought to consider (because it is a really basic architectural issue)."
DC: XML Schema produced URIRefs for each of the XML Schema primitive types. Dan used an example to indicate how to use the QName "decimal" along with the Namespace for the XML Schema types. But it is not necessarily possible to concatenate the QName to the specified Namespace to create a URI.
TBL: This was done as per the original Cambridge Communique.
TB: Confirmed that RDF requires or at least wants that you can concatenate something in a namespace (e.g. the QName) to the Namespace URI itself.
DC: The XML Schema spec does not map this pair of names into URI space. The TAG might want to support my view that this should be added to the XML Schema 1.x or 2.0 Requirements.
PC: Why did Schema do this?
DC: Since it is possible to have multiple things in an XML Schema with the same name and mapping to the URI space could cause a conflict.
PC: Do we really have an issue we understand here?
TB: I agree but the issue may be here that RDF and others are having troubles importing data types from XML Schema. These people need a good way to "point into XML Schema".
DC: Maybe the TAG could simply support the comment.
RF: Maybe we should consider the issue of RDF doing concatentation to produce URIs.
TB: The way RDF uses namespaces is obviously different from the way others use Namespaces.
RF: Some implementations strip trailing "#" and this might also be a problem.
TB: We should agree that however we resolve this the resulting URIs need to be unique.
DC: It sounds like the TAG does not understand this problem well enough to support the referenced comment above. But if it does then maybe it can support the above comment.
PC: Is the problem the lack of mapping XML Schema types to URIs or is it just a manifestation of what many WGs are doing today. For example the XML Query Functions and Operators Namespace does not provide a URI for each of the functions since it does not define how to concatenate the function names and the Namespace URI.
TB: Everyone would agree that items in an XML Namespace should be named but do we have agreement that they should be used in combination to produce a dereferenceable item.
TBL: Either we attack this problem as a series of special cases or should we attack the problem by re-examining the original premise in the Namespaces 1.0 specification.
TB: This is an ugly piece of work but it would be an architecture bonus. In addition you have to choose whether to produce the URI via concatentation as done by RDF or by using some other technique.
CL: We should be able to define any algorithm for the creation the URI e.g. concatentation is not the only possibility.
TB: If you give a universal rule for turning a Qname into a URI then we need to have an answer right up front as to whether they are de-referencable.
RF: Notet that the way you choose a Namespace URI will actually impact the algorithm you choose to combine the QName with the Namespace URI to produce a URI. You may want the Namespace URI to end in a "/" to make the concatenation to work.
DO: TB, is this coupled to the discussion to the Namespace document issue?
TB: Yes. TBL and DC: Both stated that having a document at the end of a Namespace URI is a good idea.
TBL: The construction technique may also require you be able to break the constructed URI apart to get back the original Namespace name and the QName.
CL: Since the W3C Pubrules required IDs on every referenceable part of the spec and that is probably is a reasonable place to point since it actually is a human readable description of the item being described.
Break at 3:04pm.
(discussion on rest of agenda)
TBL: Who to pass this resolution to - XML Core?
TB: Suppose we did concat. Support we decided ns docs were RIDDL-with-RDF. Suppose we had a standardized vocab. Then you could deterministicly take the concated URI and find what you needed.
CL: deterministically meaning a computer can do it
TB: yes
PC: So what types of information about a function would you want?
TB: RIDDL purpose=humann readable documentation for example. Notice that this *does* require tuples .... Not worth doin wname to URI mapping unless we go the whole hog and say how to get the real info.
DC: They are seperable, but the world does not want a seperable solution People want all the parts.
PC: Many people happily using spec as is and not requiring dereferencability. Concern over backwards compatability.
PC: So do we write this or assign it to a WG
TB: The mechanism to use those URIs is a big item so we should find out what the priority is on this work
CL: Briefing pacjage, or existing WG?
DC: Depends o resource needs
NW: Its a big chunk of work
TBL, TB: RDF is a big chunk of the solution
PC: Or that the only reason we need it is because RDF did it already
DO: So is this important to RDF specifically of Web in general
PC: Not clear
TB: History shows that , to the extent you tie stuff into URI space, you win in general
PC: Problem with existing non-dereferencable namespaces. Need to show a clear benefit for existing customers.
TB: Automatically discoverable stuff lowers support costs
PC: Concern that offline working continues to work TBL: Example of GUIDs, either works or it says its broken. URI reference table will be a central part of design; if online you can get extra information. Well known namespaces burned in, no need to dereference.
CL: Clarification - so you don't always need to dereferece a ns uri it might be cached, or compiled in, etc
TL, PC: Yes (exanmple of precompiled schemas in SOAP)
DC: Problems when ns info changes
PC: Very much so (examples)
CL: So namespaces never change once published?
TB: Can't generalize
PC: Acceptability is based on showing benefits outweigh costs
TB: Take this to AC, see if they think it is important
DO: Can't just do that, need mor einfo for AC to make informed votes. Needs a strawman proposal.
SW: Not clear anyone but RDF needs this
DC: Feedback shows the disconnect betwen RDF and other stuff has a cost.
TB: Existing code works. Problem is the expectation of dereferencability will be generated.
PC: Pubrules cost to even getting the human readable ns doc published. More work to get the additional data in the uprated ns doc. Its a higher bar, and needs to have a clear benefit.
TBL: So cost is deployment of ns docs. Expect that there would be an API to find what is known about a given ns API.
TBL: Same for GUID currently
DC: No - have to know what it is first, if its a class etc.
PC: Should get a discussion started on www-tag, before putting this to AC
ACTION: TB get such discussion started
RF: Not clear we need to solve both parts at one. Value in getting the first part done - how to compose the URI
TB: At technical level i agree with Roy; having been beaten up over fuzziness of ns dereferencing, don't want to go there again.
NW; Agree there is value in just the first part, we are fairly sure we know what a solution looks like, tat we understand the prolem. Not so clear for the second part. TBM: TAG can say there must be an algo for making a URI from a tuple. Decoupling seems important architecturally
TB: Already said its desirable to be dereferencable.
RF: Only get full benefit when dereferencable. Value to make future-proof uris.
TB: Agree the can be de=coupled, but you will get shot and accused of sloppiness if you don't do the whole thing.
DO: Is community more used to partial solutions? (assorted surfing analogies.....)
TB: Developers see no problem, its the theologians
RF: Or folks like RDF that are doing different things than originally envisaged they would be used for.
TBL: (example of dereferencing what a footnote element means in a MS namespace)
TB: One of the questions to take to www-tag should be usefulness of doing a partial solution.
============ issue not closed ======
TBL: (summarises previous discussion, top-down namespace meaning )
DO: So this is a document and not a message? Reference to media type
TBL: clarifies this is application/xml and */*+xml
TB: We are talking aboit resource representations, without additional metadata
DO: So this applies to application/soap?
TBL: yes RF: What is the issue, exactly?
TBL: Can't know the meaning of any innter namespace unless we know the outer layers.
NW: Studies this and came to the opposite conclusion to TimbBL
DC Want a test case to try against yout two designs.
NW There is always an outer layer of context.
TB: xml:lang is an example, not dependent on anything at all in terms of context.
TBL: Counter-example: tim-rot-13 namesoace
CL: More realistically, xml encryption. (rathole on realism or lack therof of examples)
TBL: XSL-T transform inside an XML Query example, something like that TBL: (draws on whiteboard)
<foo xsl: ... <xslt:whatever> .. </xslt:whatever> <xmlenc:> </xmlenc> ... </foo>
NW: add xsl:version attribute on root, or example does not work (added above)
TB: media type is high, first off. So if this says its xsl, then process it as xsl, etc. Thus, never ever use application/xml
TB: Secondly, architecturally unsound to make proc decisions ased on ns of elements without regard to ns of enclosing elements and mime type.
TBL: describes a pipeline where each stage produces a different mime type, then says that it does not scale, and sucks. Top-down, xml function based on namespaces and ignoring mime type, recursive descent, is better
RF: Three broken specs, that did not define their root element.
NW: (adds an Xpath for the third following sibling, so walks out of the subtree)
PC: You said that you can't do anything without the ns of its encclosing element. Its an NxN problem.
TB: Top level heading is done on http headers. If served as html, send to html processor,etc. Correct way to deal with enclosed namespaces varies by application. So in practical terms, are you wantin ti require any s/w be prepared to do arbitrary dispatching to other modules based on ns?
TBL: Not constraining software, but want to have the defined output document to a series of recursively explanded functions Can do lots of ways as long as you get the correct result.
TB What sj=hould the software do?
TBL: Not wanting to say what S/w should do, just what the result is.
DC: So definitions of ns should say what to do with unknown ns.
NW: Points to pipeline language submission
TBL: Its another function.
DO: So in case of application/xml, this dispatching is what you do. Other mime tpes get to make their own definitions.
CL: The special case of dispatching on an attribute, not on a ns, was not covered. Oh and btw this example is never served as *the content*, but as a stylesheet linked from the content
RF: Mime is not layered, this is a problem. Need to store the layering of twhat the outer types were.
TB: Seeing that pipeline efficiencies can be gained from the functional model. Could avoid a series of batch processes.
TBL: References should always be to the final tree. Mutally recursive would blow up ;-)
RF: Leads to much need for escaping elements. Don't do multiple layers in XML.
DO: XML enc makes it explicit how it does the layering. Sepc should also say how it does layering of itself.
TB: attribute is redundant - it only makes a difference if it has already been sent to an xsl processor.
NW: Worried I can't write the 'count the tags' processor because the xsl-t always gets expanded
TBL: Social contract of the Web @@@ scribe did not follow@@
DC: Example of recipeML that I just made up, plus a PU to some CSS
CL: Plus eXLink
DC: Not yet, lets do the simple case first (no resolution really)
DO: So what is the actual problem? I have to create my own media types and processing model for anything to work. Lowers the bar for general xml processors
DC: No, raises it
TB: Lowers the bar for language designers
DO: yes. So what is te brief problem summaty? We can't make generalised processors because we don't know how to mix namespaces in the general case.
PC: how this links to xml processing workshop results
TBL: PC: Processing language is fines, since I can choose not to use it. But clearly there is no default processing model, ws was clear it did not want one. No one size that fits all.This is an attempt to make a default processing model.
DO: Default processing model for application/xml only?
TB No, all xml DO: So application/soap is ok wheras application/soap+xml would give me this model.
TBL: Yes.
TB Media regirstation gets to say what its processing model is
TBL: media type registration cannot define elements that redefine meaning of parents or siblings.
DO: So per-spec processing models could be done
TBL: Its too complex
DO: Paul, why do you object? A vocabulary can still ignore the processing model. TAG needs to ensure it does not create new problems
SW: RDF saw this problem, its not well defined
TBL: XInclude and XEnc for example, can you nest one inside the other? Tools don't do it, its not defined.
TBL: Hacked versions with explicit processing based on makefiles work.
PC: So if you were voting on doing a processing language, you would vote no?
TBL: Correct, it should be a simple processing model.
PC: Composability is very important.
RF: XML should be a fully self descriptive language, but it isn't, yet. An element needs to be able to say, process me first
CL: That doesn't scale.
SW: Closing remarks?
PC: Need to seek direction from AC on direction of XML activity.
ACTION DC: Edit raw minutes to nice HTML public minutes.
DC: Proposes SW as co-chair of TAG So RESOLVED
Next f2f meetings
24-25 Sept in Vancouver
18 November, Boston (twinned with next AC meeting)
(regrets RF, ApacheCon)
SW: Offers to host in Bristol
PC: Next should be at tech plen, Mar 3-7 2003 in Boston, those with 1 year terms will not be there
based on:
DanC, Norm W, Tim Bray, Paul C, and Chris L, scribes, for TimBL and Stu,
co-chairs
$Revision: 1.19 $ of $Date: 2002/07/16 16:12:48 $ by $Author: connolly $
|
http://www.w3.org/2001/tag/2002/0505-agenda
|
CC-MAIN-2014-10
|
refinedweb
| 4,356
| 71.55
|
Blazor 0.4.0 experimental release now available
Daniel
Blazor 0.4.0 is now available! This release includes important bug fixes and several new feature enhancements.
New features in Blazor 0.4.0 (details below):
- Add event payloads for common event types
- Use camelCase for JSON handling
- Automatic import of core Blazor namespaces in Razor
- Send and receive binary HTTP content using HttpClient
- Templates run on IIS Express by default with autobuild enabled
- Bind to numeric types
- JavaScript interop improvements
A full list of the changes in this release can be found in the Blazor 0.4.0 release notes.
Get Blazor 0.4.0
To get setup with Blazor 0.4.0:
- Install the .NET Core 2.1 SDK (2.1.300 or later).
- Install Visual Studio 2017 (15.7) with the ASP.NET and web development workload selected.
- Note: The Blazor tooling isn’t currently compatible with the VS2017 preview channel (15.8). This will be addressed in a future Blazor release.
-.4.0
To upgrade an existing Blazor project from 0.3.0 to 0.4.0:
- Install all of the required bits listed above.
- Update your Blazor package and .NET CLI tool references to 0.4.0..4.0" /> <PackageReference Include="Microsoft.AspNetCore.Blazor.Build" Version="0.4.0" /> <DotNetCliToolReference Include="Microsoft.AspNetCore.Blazor.Cli" Version="0.4.0" /> </ItemGroup> </Project>
Event payloads for common event types
This release adds payloads for the following event types:
Thank you to Gutemberg Ribeiro (galvesribeiro) for this contribution! If you haven’t checked out Gutemberg’s handy collection of Blazor extensions they are definitely worth a look.
Use camelCase for JSON handling
The Blazor JSON helpers and utilities now use camelCase by default. .NET objects serialized to JSON are serialized using camelCase for the member names. On deserialization a case-insensitive match is used. The casing of dictionary keys is preserved.
Automatic import of core for Blazor namespaces in Razor
Blazor now automatically imports the Microsoft.AspNetCore.Blazor and Microsoft.AspNetCore.Blazor.Components namespaces in Razor files, so you don’t need to add
@using statements for them. One less thing for you to do!
Send and receive binary HTTP content using HttpClient
You can now use
HttpClient to send and receive binary data from a Blazor app (previously you could only handle text content). Thank you Robin Sue (Suchiman) for this contribution!
Bind to numeric types
Binding now works with numeric types:
long,
float,
double,
decimal. Thanks again to Robin Sue (Suchiman) for this contribution!
Templates run on IIS Express by default with autobuild enabled
The Blazor project templates are now setup to run on IIS Express by default, while still preserving autobuild support.
JavaScript interop improvements
Call async JavaScript functions from .NET
With Blazor 0.4.0 you can now call and await registered JavaScript async functions like you would an async .NET method using the new
RegisteredFunction.InvokeAsync method. For example, you can register an async JavaScript function so it can be invoked from your Blazor app like this:
Blazor.registerFunction('BlazorLib1.DelayedText', function (text) { // Wait 1 sec and then return the specified text return new Promise((resolve, reject) => { setTimeout(() => { resolve(text); }, 1000); }); });
You then invoke this async JavaScript function using
InvokeAsync like this:
public static class ExampleJSInterop { public static Task<string> DelayedText(string text) { return RegisteredFunction.InvokeAsync<string>("BlazorLib1.DelayedText", text); } }
Now you can await the async JavaScript function like you would any normal C# async method:
var text = await ExampleJSInterop.DelayedText("See ya in 1 sec!");
Call .NET methods from JavaScript.
To invoke a .NET method from JavaScript the target .NET method must meet the following criteria:
- Static
- Non-generic
- No overloads
- Concrete JSON serializable parameter types
For example, let’s say you wanted to invoke the following .NET method when a timeout is triggered:
namespace Alerts { public class Timeout { public static void TimeoutCallback() { Console.WriteLine('Timeout triggered!'); } } }
You can call this .NET method from JavaScript using
Blazor.invokeDotNetMethod like this:
Blazor.invokeDotNetMethod({ type: { assembly: 'MyTimeoutAssembly', name: 'Alerts.Timeout' }, method: { name: 'TimeoutCallback' } })
When invoking an async .NET method from JavaScript if the .NET method returns a task, then the JavaScript
invokeDotNetMethodAsync function will return a
Promise that completes with the task result (so JavaScript/TypeScript can also use await on it).:
Thanks for trying out Blazor!
|
https://devblogs.microsoft.com/aspnet/blazor-0-4-0-experimental-release-now-available/
|
CC-MAIN-2019-18
|
refinedweb
| 713
| 52.26
|
table of contents
other versions
- buster 4.16-2
- buster-backports 5.02-1~bpo10+1
- testing 5.03-1
- unstable 5.03-1
NAME¶abort - cause abnormal process termination
SYNOPSIS¶
#include <stdlib.h>
void abort(void);
DESCRIPTION¶The abort() first unblocks the SIGABRT signal, and then raises that signal for the calling process (as though raise(3) was called). This results in the abnormal termination of the process unless the SIGABRT signal is caught and the signal handler does not return (see longjmp(3)).
If the SIGABRT signal is ignored, or caught by a handler that returns, the abort() function will still terminate the process. It does this by restoring the default disposition for SIGABRT and then raising the signal for a second time.
|
https://manpages.debian.org/buster-backports/manpages-dev/abort.3.en.html
|
CC-MAIN-2019-47
|
refinedweb
| 125
| 67.76
|
Ray Serve FAQ¶
This page answers some common questions about Ray Serve. If you have more questions, feel free to ask them in the Ray Github Discussions page.
Contents
-
How do I delete backends and endpoints?
How do I call an endpoint from Python code?
How do I call a method on my replica besides __call__?
How do I enable CORS and other HTTP features?
How do
ServeHandleand
ServeRequestwork?
-
Does Ray Serve use Flask?
Can I use asyncio along with Ray Serve?
Are there any other similar frameworks?
How does Serve compare to TFServing, TorchServe, ONNXRuntime, and others?
How does Serve compare to AWS SageMaker, Azure ML, Google AI Platform?
How does Serve compare to Seldon, KFServing, Cortex?
How does Ray Serve scale behave on spikes?
Is Ray Serve only for ML models?
How do I deploy serve?¶
See Deploying Ray Serve for information about how to deploy serve.
How do I delete backends and endpoints?¶
To delete a backend, you can use
client.delete_backend.
Note that the backend must not be use by any endpoints in order to be delete.
Once a backend is deleted, its tag can be reused.
client.delete_backend("simple_backend")
To delete a endpoint, you can use
client.delete_endpoint.
Note that the endpoint will no longer work and return a 404 when queried.
Once a endpoint is deleted, its tag can be reused.
client.delete_endpoint("simple_endpoint")
How do I call an endpoint from Python code?¶
Use
client.get_handle to get a handle to the endpoint,
then use
handle.remote to send requests to that
endpoint. This returns a Ray ObjectRef whose result can be waited for or retrieved using
ray.wait or
ray.get, respectively.
handle = client.get_handle("api_endpoint") ray.get(handle.remote(request))
How do I call a method on my replica besides __call__?¶
To call a method via HTTP use the header field
X-SERVE-CALL-METHOD.
To call a method via Python, use
handle.options:
class StatefulProcessor: def __init__(self): self.count = 1 def __call__(self, request): return {"current": self.count} def other_method(self, inc): self.count += inc return True handle = client.get_handle("endpoint_name") handle.options(method_name="other_method").remote(5)
The call is the same as a regular query except a different method is called within the replica. It is compatible with batching as well.
How do I enable CORS and other HTTP features?¶
Serve supports arbitrary Starlette middlewares and custom middlewares in Starlette format. The example below shows how to enable Cross-Origin Resource Sharing (CORS). You can follow the same pattern for other Starlette middlewares.
Note
Serve does not list
Starlette as one of its dependencies. To utilize this feature,
you will need to:
pip install starlette
from starlette.middleware import Middleware from starlette.middleware.cors import CORSMiddleware client = serve.start( http_middlewares=[ Middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"]) ])
How do
ServeHandle and
ServeRequest work?¶
Ray Serve enables you to query models both from HTTP and Python. This feature
enables seamless model composition. You can
get a
ServeHandle corresponding to an
endpoint, similar how you can
reach an endpoint through HTTP via a specific route. When you issue a request
to an endpoint through
ServeHandle, the request goes through the same code
path as an HTTP request would: choosing backends through traffic
policies, finding the next available replica, and
batching requests together.
When the request arrives in the model, you can access the data similarly to how you would with HTTP request. Here are some examples how ServeRequest mirrors Flask.Request:
Note
You might have noticed that the last row of the table shows that ServeRequest supports
Python object pass through the handle. This is not possible in HTTP. If you
need to distinguish if the origin of the request is from Python or HTTP, you can do an
isinstance
check:
import flask if isinstance(request, flask.Request): print("Request coming from web!") elif isinstance(request, ServeRequest): print("Request coming from Python!")
Note
Once special case is when you pass a web request to a handle.
handle.remote(flask_request)
In this case, Serve will not wrap it in ServeRequest. You can directly
process the request as a
flask.Request.
How fast is Ray Serve?¶
We are continuously benchmarking Ray Serve. We can confidently say:
Ray Serve’s latency overhead is single digit milliseconds, often times just 1-2 milliseconds.
For throughput, Serve achieves about 3-4k qps on a single machine.
It is horizontally scalable so you can add more machines to increase the overall throughput.
You can checkout our microbenchmark instruction to benchmark on your hardware.
Does Ray Serve use Flask?¶
Flask is only used as a web request object for servable to consume the data. We actually use the fastest Python web server: Uvicorn as our web server, alongside with the power of Python asyncio. Flask is ONLY the request object that we are using, Uvicorn (not flask) provides the webserver.
Can I use asyncio along with Ray Serve?¶
Yes! You can make your servable methods
async def and Serve will run them
concurrently inside a Python asyncio event loop.
Are there any other similar frameworks?¶
Yes and no. We truly believe Serve is unique as it gives you end to end control over the API while delivering scalability and high performance. To achieve something like what Serve offers, you often need to glue together multiple frameworks like Tensorflow Serving, SageMaker, or even roll your own batching server.
How does Serve compare to TFServing, TorchServe, ONNXRuntime, and others?¶
Ray Serve is framework agnostic, you can use any Python framework and libraries. We believe data scientists are not bounded a particular machine learning framework. They use the best tool available for the job.
Compared to these framework specific solution, Ray Serve doesn’t perform any optimizations to make your ML model run faster. However, you can still optimize the models yourself and run them in Ray Serve: for example, you can run a model compiled by PyTorch JIT.
How does Serve compare to AWS SageMaker, Azure ML, Google AI Platform?¶
Ray Serve brings the scalability and parallelism of these hosted offering to your own infrastructure. You can use our cluster launcher to deploy Ray Serve to all major public clouds, K8s, as well as on bare-metal, on-premise machines.
Compared to these offerings, Ray Serve lacks a unified user interface and functionality let you manage the lifecycle of the models, visualize it’s performance, etc. Ray Serve focuses on just model serving and provides the primitives for you to build your own ML platform on top.
How does Serve compare to Seldon, KFServing, Cortex?¶
You can develop Ray Serve on your laptop, deploy it on a dev box, and scale it out to multiple machines or K8s cluster without changing one lines of code. It’s a lot easier to get started with when you don’t need to provision and manage K8s cluster. When it’s time to deploy, you can use Ray cluster launcher to transparently put your Ray Serve application in K8s.
Compare to these frameworks letting you deploy ML models on K8s, Ray Serve lacks the ability to declaratively configure your ML application via YAML files. In Ray Serve, you configure everything by Python code.
How does Ray Serve scale behave on spikes?¶
You can easily scale your models just by changing the number of replicas in the BackendConfig. Ray Serve also has an experimental autoscaler that scales up your model replicas based on load. We can improve it and welcome any feedback! We also rely on the Ray cluster launcher for adding more machines.
Is Ray Serve only for ML models?¶
Nope! Ray Serve can be used to build any type of Python microservices application. You can also use the full power of Ray within your Ray Serve programs, so it’s easy to run parallel computations within your backends.
|
https://docs.ray.io/en/master/serve/faq.html
|
CC-MAIN-2020-50
|
refinedweb
| 1,303
| 67.55
|
AS923 Class C not working?
I ran into a problem trying to use class C on AS923.
Class A works like expected, but when I change to class C, I cannot receive anything. The joining procedure works fine.
This is my declaration:
global lora lora = LoRa(mode=LoRa.LORAWAN, power_mode=LoRa.ALWAYS_ON, region=LoRa.AS923, device_class=LoRa.CLASS_C)
socket and Hello message:
pycom.rgbled(0x141400) # create a LoRa socket global lsock lsock = socket.socket(socket.AF_LORA, socket.SOCK_RAW) lsock.setsockopt( socket.SOL_LORA, socket.SO_DR, 2 ) # datarate lsock.setblocking(False) #send Hello packet print('sending Hello') lsock.setsockopt(socket.SOL_LORA, socket.SO_CONFIRMED, 1 ) model = 0 if 'model' in settings.dic: model = settings.dic['model'] send( bytes([ CMD_HI , 3, settings.PROTO_VERSION, settings.SW_VERSION, model ]) )
This is how I wait for a message:
lsock.settimeout(10) lsock.setsockopt( socket.SOL_LORA, socket.SO_CONFIRMED, 0 ) print('waiting for data') while(True): try: data, port = lsock.recvfrom(128) proto_handler_multi(data, port) except TimeoutError: send(bytes([0x01,0x00])) #print('still waiting...') continue
The gateway (a CloudGate) receives the message and sends messages when I want to but I don't see anything happening on the LoPy. Anyone ran into this issue before?
Can you tell what is wrong with this code then:
from network import LoRa import socket import binascii import settings import sys import time import pycom LORA_FREQUENCY = 923000000 LORA_FREQ_ADD = 20000 LORA_GW_DR = "SF10BW125" LORA_NODE_DR = 2 #.AS923, device_class=LoRa.CLASS_C) # create an OTA authentication params dev_eui = binascii.unhexlify('70b3d5499da97f9c')') # set the 3 default channels to the same frequency (must be before sending the OTAA join request) print('Adding channels') lora.add_channel(0, frequency=923200000, dr_min=0, dr_max=5) #join a network using OTAA print('Joining...') lora.join(activation=LoRa.OTAA, auth=(dev_eui, app_eui, app_key), timeout=0, dr=LORA_NODE_DR) #lora.join(activation=LoRa.OTAA, auth=(dev_eui, app_eui, app_key), timeout=0) # wait until the module has joined the network while not lora.has_joined(): time.sleep(2.5) print('Not joined yet...') print('Joined') # remove all the non-default channels print('Removing other channels') for i in range(1, 16): lora.remove_channel(i) # create a LoRa socket s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) # set the LoRaWAN data rate s.setsockopt(socket.SOL_LORA, socket.SO_DR, LORA_NODE_DR) #test set unconfirmed. s.setsockopt(socket.SOL_LORA, socket.SO_CONFIRMED, 0) # make the socket blocking s.setblocking(False) time.sleep(5.0) pkt = b'PKT #' + bytes([i]) print('Sending:', pkt) s.send(pkt) time.sleep(4) print('Listening...') while True: rx, port = s.recvfrom(256) if rx: print('Received: {}, on port: {}'.format(rx, port)) time.sleep(1)
The files have been sent! Thanks for looking into this! I will update this thread once the code is fully functional.
@harald. Please can you send your complete code to support (support@pycom.io) ? Class C in the AS923 region works. Please see the pic below:
@jcaron said in AS923 Class C not working?:
I have to admit I don't even know how channel/data rate selection works for class C downlinks... I suppose there must be default channels, and some MAC commands to change that?
Channel plan is the same for Class A and Class C devices, difference is how downlink delivered. Class A devices listen continously on RX2 channel after RX1 window closed followed by uplink, so you can send downlink any time. Class C is also used for Multicast but as far as I know it is not implemented by PyCom.
@harald You may want to check the region-related files in the source code to see if there's anything obvisouly different on the topic for AS923.
I have to admit I don't even know how channel/data rate selection works for class C downlinks... I suppose there must be default channels, and some MAC commands to change that?
What type of join are you using? OTAA or ABP?
I am Using a CloudGate and create the packets on the gateway itself. I can see the packets being sent, but nothing received on the LoPy. When I changed to EU HW and changed my code to EU868, it worked normally.
When I send from the LoPy to the gateway, I also see the packets of my LoPy.
Also Class A functions normally. It looks like it's not listening in Class C when using AS923. How can I confirm this?
What lorawan server are you using ? You can see downlink packets on your lora gateway ?
|
https://forum.pycom.io/topic/3867/as923-class-c-not-working/7?lang=en-US
|
CC-MAIN-2021-17
|
refinedweb
| 733
| 52.26
|
Kentico fixes reported bugs within 7 business days by releasing a hotfix. (Learn more about our 7-day bug-fix policy). Please only use hotfixes if you are experiencing a major bug, as the hotfixes do not represent fully tested code. If using v6 or later, we recommended you use the Kentico Installation Manager to apply hotfixes. We regularly release hotfixes every Friday, with possible exceptions in the case of serious bugs and around holidays.
The hotfixes are cumulative, meaning that the hotfix contains all the previous hotfixes for the same version. We recommend that you apply the latest hotfix available for the respective Kentico version you are using.
Staging service authentication using X.509 certificates did not work on instances hosted as an Azure App Service (the system worked with a different certificate store location than the one used by certificates imported into Azure).
When a marketing email containing the 'IsInPersona' macro was sent to a contact group, the macro always returned a 'True' value for all contacts in the contact group.
Payments using the default Authorize.Net provider could fail due to an exceeded maximum length of requests generated by the system (in cases where the payment data contained long parameters, such as the names of shipping options, etc.). Additionally, the system did not resolve localization expressions in the parameters of the sent payment data.
The '~/CMSModules/Content/FormControls/Documents/SelectPath.ascx' control's selection dialog did not work if the control was placed into the markup of a web form or user control and its 'EnableSiteSelection' property was set as an attribute.
When the 'URL selector' form control was configured to display the Media tab and a media library 'Starting path' was also specified, it was not possible to select media files from subfolders if the given library was mapped to Azure storage.
If a customer set both the shipping and billing address during checkout on a Portal Engine site (via a page containing the 'Customer detail' and 'Customer address' web parts), and then later returned to the given checkout step to update one of the addresses, the changes were not saved.
The system did not send password reset emails in cases where the user's email address matched the address of another user account that was disabled.
When utilizing a web farm setup together with the integration bus, a single integration task could be processed by multiple web farm servers (usually when the environment experienced heavy load). For example, this could result in duplicates of the processed object being created in the connected external system.
When the 'Allow switch sides' setting was cleared and a 'Relationship name' was specified in the advanced editing control settings of a page type field that used the 'Related pages' form control, the resulting field did not allow adding of related pages from sites other than the current site.
Double opt-in, subscription, and unsubscription confirmation emails were sent with 'low' priority, which could cause long delays for subscribers on instances that sent out a large number of other emails. After applying the hotfix, the priority of such confirmation emails is set to 'normal'.
The hotfix updates the Microsoft Translator Text API from Version 2 (V2) to Version 3 (V3), because V2 will be discontinued on April 30, 2019. In addition, the 'Speak' method of the 'MicrosoftTranslatorService' class, which could be used in custom code for text-to-speech functionality, is no longer supported after applying the hotfix.
When using URL shorteners to process links in text posted to social media, the application consumed excessive resources on the server (CPU) if the link URLs contained certain special characters. The problem also occurred when calling the 'URLShortenerHelper.ShortenURLsInText' method in custom code.
When a page with a workflow applied contained an 'Editable text' web part, the latest version of the web part's content was not displayed in 'Preview' mode when viewing child pages which inherited the original page's content (via page nesting and the 'Page placeholder' web part).
When a page with an associated SKU was under a workflow, modified fields of the SKU that contained ID values (such as the 'SKUDepartmentID' field) were not staged correctly if the IDs were different between the staging servers, but the 'NodeSKUID' field was identical.
When editing forms on a Portal Engine site via the 'Form builder' tab in the 'Forms' application, removing or cloning of fields did not work if the field's 'Label' value contained an apostrophe (single quote) character.
If an email widget property used the 'Macro editor' form control, context specific objects were not available in the macro autocomplete feature and 'Insert Macro' dialog. It was still possible to enter such objects manually.
The 'Uni selector' form control did not save selected items correctly if the returned value (determined by the control's 'Return column name' setting) contained special characters. The problem occurred in selection modes that utilize a dialog, such as 'Multiple'.
The 'FormFieldRenderingConfiguration.GetConfiguration' event added as part of the form builder markup customization API introduced in hotfix 12.0.14 was incorrectly invoked in certain scenarios. After applying the hotfix, the event is only triggered for forms rendered by the 'Form' widget. All documented customization scenarios remain unaffected.
For users with an associated customer, setting the 'First name', 'Last name' or 'Email' property to an empty value incorrectly cleared the corresponding value for the customer entity. These are required fields for customers, so this type of synchronization caused an invalid state. After applying the hotfix, only non-empty name and email values are synchronized from users to customers.
When using custom form components in the configuration dialog for page builder widget properties, scrolling functionality was incorrectly disabled. As a result, form components with scrollable elements (e.g. advanced drop-down options) did not work when used to edit widget properties.
When calling the 'modalDialog' JavaScript function in custom client code within the administration interface, the function's 'otherParams' parameter was ignored in certain cases (in locations where the system opened an advanced modal dialog). As a result, developers could not control parameters such as the resizability of the opened dialog.
When running a campaign on an MVC site, the value of the 'utm_content' parameter used in the campaign's links was not logged correctly for conversions or displayed in the campaign's reports.
The 'ResourceStringInfoProvider.TranslationExists' method returned an incorrect result in certain cases (after the system's cache was cleared).
An error occurred when attempting to select a file in the 'Linked file' property of the 'Javascript' web part if another file was already specified.
When using the Advanced export feature for contacts in the 'Contact groups' application with the 'Export raw database data' option selected, it was not possible to select custom contact fields for the export.
Processing of requests containing a query string parameter without a value, such as '?param', could result in an error in certain scenarios. For example, the errors could occur for requests that loaded files and other resources.
The 'SKU selector' form control did not work if its 'Allow multiple choice' setting was enabled.
If the product variant editing form (i.e. the 'Variant properties' alternative form of the 'SKU 'class) was customized to display the 'Image' field (SKUImagePath), the field's default 'Product image selector' form control did not correctly save information about uploaded image metafiles. This resulted in incorrect behavior, for example when displaying or staging the variant and its image.
Tabs displayed by the 'Tabs layout' web part were not hidden correctly in certain cases when their content was empty, even when the web part's 'Hide empty tabs' property was enabled. For example, the problem occurred if a tab contained a Repeater web part with an empty data source and the 'Hide if no record found' property enabled.
If a report had parameters with defined validation rules, the validation did not work when the report and its parameter filter were displayed on a website page using a reporting web part or widget.
When deleting a linked page from the content tree in the 'Pages' application, it was not possible to select an alternative page to which old URLs could be redirected.
The hotfix introduces additional API that enables more extensive markup customization options for forms built using the 'Form builder' feature. See the hotfix instructions for details.
The 'Seznam' search engine defined in the 'Search Engines' application had an obsolete domain configured in its 'Domain rule' property. As a result, visitors from the Seznam search engine (seznam.cz) were not being tracked accurately. After applying the hotfix, the system correctly tracks all visitors that access a site from the 'Seznam' search engine.
Authentication of requests to the Kentico REST service failed if the provided password contained the colon character (':').
If an MVC widget or form component was registered with an identifier containing a certain suffix (e.g. matching a blocked IIS extensions such as '.resources' or '.sitemap'), an error occurred when the item was added to the page or form builder.
When publishing an MVC live-site application (e.g., via the Visual Studio 'Publish' wizard), the publishing process did not copy certain .NET Resource (.resx) files. This resulted in unresolved resource strings in parts of the published application. The problem occurred when using versions 12.0.1 to 12.0.12 of the 'Kentico.AspNet.Mvc' NuGet package. From package version 12.0.13, all necessary resource files are copied during the publishing process.
The system allowed invalid characters as part of the 'Name' property of form fields (adjustable via the Properties tab of the MVC Form builder). After applying the hotfix, the 'Name' property must begin with a letter or an underscore ('_') character and may contain only letters, numbers, and additional underscore characters.
All event attendees stored for an event, represented as a page of the 'Event (booking system)' page type, were removed when one of the page's culture versions was deleted. After applying the hotfix, event attendees are removed only after the deletion of the event's last remaining culture version.
Calling the 'GetPage' method in the Index action of an MVC widget without any properties defined resulted in an error when the widget was displayed.
The hotfix removes the 500 character restriction placed on the 'Text area' form component for the MVC Form builder. After applying the hotfix, the character limit is by default set to the maximum number of characters allowed by the underlying database column. However, note that this change is only reflected in form fields created after the hotfix was applied. See the hotfix instructions for details.
The system did not send confirmation emails to recipients who unsubscribed from a single email feed of the 'Email campaign' type. Additionally, confirmation emails were incorrectly sent in certain cases after unsubscribing from all email feeds (email campaigns and newsletters), which is not intended behavior.
If a tax exemption for customers was created by registering a custom 'ICustomerTaxClassService' implementation, it was only applied for products with a tax class that had the 'Zero tax if tax ID is supplied' property enabled. After applying the hotfix, the property no longer affects custom tax exemptions (unless checked in the code of the custom implementation).
If the default CSRF security token functionality was disabled using the 'CMSEnableCsrfProtection' web.config key, custom 404 error handling pages assigned through the 'Page not found URL' setting were not displayed when a POST request targeted a non-existing URL (by default the standard IIS 404 page was displayed instead).
Widgets or sections that utilized actions other than 'Index' (for example the submit action of the default 'Form' widget) did not work correctly in certain scenarios. The problem could occur if the MVC application's route collection did not contain a general route with a controller and action parameter, or if a different route with a custom controller and the 'Index' action matched the page builder URLs.
Page builder and preview functionality did not work on pages whose controller and action was accessed through another action using an MVC redirect method (for example 'RedirectToAction').
A validation error occurred when attempting to save a field with the 'Form field selector' form control if the control's 'Field data type' setting was set to the 'All' option.
After installing or updating the 'Kentico.AspNet.Mvc' NuGet package, the 'CMSApplicationModule' module in the MVC project's web.config file did not contain the 'preCondition' attribute, which could have a negative performance impact on the application. Versions 12.0.10 and newer of the package ensure that the preCondition is correctly set to 'managedHandler'.
If multiple staging tasks were synchronized in a single batch, and the synchronization failed for one or more of the tasks, the entire batch remained in the task list (including tasks that were already successfully processed).
A move operation on a subset of pages under an Azure search index redundantly updated all pages in the corresponding index. This could result in very long indexing operations on sites with a large number of indexed pages.
Processing of requests to virtual paths defined by the Microsoft ASP.NET Web Optimization Framework, such as JavaScript or CSS bundles, resulted in an error (null reference exception). The errors occurred only for requests handled by the Kentico web project (not in MVC applications using the Kentico API). introduces the 'CMSUseStrictInventoryManagement' web.config key, which you can enable to prevent the system from creating such orders. If you enable the key and have an MVC site or Portal Engine site with custom checkout components, you need to ensure that your custom code handles the resulting 'InvalidOperationException' and displays appropriate information to customers.
When a page with an associated SKU was synchronized with the 'Publish from' field set to a future date, fields of the SKU were not staged correctly (except the name and description fields).
Synchronizing pages with an associated product (SKU) could break the relationship between the page and the product on the target server (in cases where the IDs of the given SKU were different between the staging instances).
The 'DataItemCount', 'IsFirst()' and 'IsLast()' transformation property and methods did not work correctly for data returned by the smart search (for example in transformations used by the 'Smart search results' web part). After applying the hotfix, the property and methods return the correct values for the currently displayed page of results.
The system did not allow users to manually recalculate a persona after a new rule was added for the persona in the 'Personas' application.
When selecting or uploading an image file in certain types of media selection dialogs (for example in a page field using the 'Media selection' form control), resizing of the image with a locked aspect ratio did not work correctly.
The 'Order by' property of the 'Selector' UI web part did not work, and also could not be set through the properties of UI elements that used the 'Listing with general selector' page template. After applying the hotfix, custom UI elements based on this template can now have their selector order by value configured through a new property.
The 'DefaultValue' property of the 'EditingComponent' attribute did not initialize form components (e.g., in forms or widget properties dialogs) with the specified default value. After applying the hotfix, the 'DefaultValue' property correctly sets a form component's default value when necessary.
When multiple content editors attempted to save pages under a workflow in the Pages application, a deadlock could occur in certain cases.
If an existing page type inherited fields from another page type, and a new field or category was added to the parent, the position of the new field in the inherited type could be incorrect (the order was not adjusted according to the inherited type's own additional fields). After applying the hotfix, such new fields are always added directly below the inherited field that precedes the new field in the parent page type.
The 'Default value' of page type fields was always loaded in the editing form, even for existing pages that had a different value specified. Saving such forms could cause users to make unintended changes in the page data. The problem was introduced by applying hotfix 12.0.5. However, applying hotfix 12.0.6 reverts an older bug fix, and prevents the default value from being applied for the following system page fields: DocumentInheritsStylesheet, DocumentShowInSiteMap, DocumentMenuItemHideInNavigation, DocumentIsArchived, DocumentUrlPath, DocumentWildcardRule and DocumentPriority.
If the page builder was initialized in a controller of a page located in an MVC Area, an error was displayed instead of the content on the live site and when previewing the page.
When erasing personal data from the system in the 'Data protection' application on the 'Right to be forgotten' tab, data subject identifiers (e.g. an email address) that contained certain special characters, such as '+', were not processed correctly, which could result in data not being removed.
The 'Users data source' web part did not order data correctly if the 'ORDER BY condition' property contained multiple columns with different order directions (ASC or DESC keywords). The last order keyword was incorrectly used for all columns.
Created Azure search indexing tasks were processed synchronously, which could result in an unresponsive user interface (e.g., when manipulating indexed pages in the content tree). After applying the hotfix, created Azure search tasks are processed asynchronously in one-minute intervals (if not customized otherwise).
The external Windows service for running scheduled tasks did not release allocated memory correctly in certain cases, which resulted in high memory consumption.
If a macro expression was added into the 'Default value' of a page type field with the 'Required' flag enabled, certain types of macros, for example {% EditedObject %}, were not evaluated correctly and returned a null value when creating new pages of the given type.
If an MVC site was configured to convert URLs to lower case (by setting the 'RouteCollection.LowercaseUrls' property to true in the code of the related MVC project), errors occurred in certain parts of the page builder and form builder interface, for example the widget property configuration dialog.
The Import Toolkit utility did not reflect application keys in the web.config file of the related Kentico project. For example, this caused incorrect behavior when importing data with continuous integration enabled and a custom repository path configured in the target project's web.config. Additionally, serialization of continuous integration data was incorrectly performed when running a simulated import of data in the utility. To fix the issues, the hotfix must be applied to the Setup files (switch to advanced mode in the hotfix utility).
The 'Process domain prefix' setting was not taken into account when tagging links in marketing emails with UTM parameters. If the domain prefix in an email link's URL was different from the prefix in the main domain set for the site, the given link was not tagged with the specified UTM parameters.
Updating or assigning page categories caused indexing tasks for Azure indexes of the 'Pages' type to fail if the index was newly created and not yet rebuilt, or if the subset of the content tree to be indexed, as specified on an index's 'Indexed content' tab, did not yet contain any pages.
A license limitation error was logged for license editions lower than EMS when working with MVC widgets in the page builder. After applying the hotfix, such errors only occur if there are personalization condition types registered in the system (which require an EMS license).
When an advanced workflow containing an asynchronous step (e.g., the 'Wait' or 'Send email' step) was applied to a page in a staging environment, changes to the page past the asynchronous step were not logged into the selected staging task group.
When indexing page attachments, errors caused by invalid Unicode surrogate pairs in PDF files terminated the indexing operation. Since such invalid surrogate pairs can occur in otherwise valid PDF files, the pairs are now stripped during the indexing process.
When sending newsletters, the "License for feature 'NewsletterABTesting' not found" error was logged in the event log and the newsletters were not sent on sites with lower than EMS licenses.
The 'Chat support request' web part did not render correctly in certain cases (e.g., on 404 error pages).
The system disregarded all multi-factor authentication validity interval customizations (via overriding the 'ClockDriftTolerance' property).
The 'ORDER BY expression' field was not taken into account when displaying related pages using the 'Repeater' web part. The default order of the related pages was always displayed.
When indexing page attachments, errors caused by malformed attachment content (e.g., invalid Unicode characters) displayed insufficient debugging information. After applying the hotfix, the error message contains the ID and name of the attachment causing the exception.
Certain operations with products could lead to SQL deadlock errors on sites with the 'Kentico CMS Base' or lower license editions.
When accessing forum groups belonging to a specific group on the 'Forums' tab of the 'Group' application, more strict permissions than necessary were required.
When setting the 'Subsidiary of' field on the 'General' tab of an account in the 'Contact management' application, the system did not preserve the account selection if the parent account was selected via the '(more items…)' dialog window.
Activities of the 'Form submission' type were logged with an incorrect 'Activity URL' value on content-only (MVC) sites. After applying the hotfix, such activities are logged with the URL of the page displaying the given form.
When logging new events into the event log, the system did not delete old events according to the limit specified in the 'Event log size' setting.
On Kentico EMS instances hosting multiple sites, subscriber data was processed incorrectly when automatically merging contacts who subscribed to newsletters from different sites. This could lead to marketing emails not being sent to subscribers and loss of subscriber data in some cases.
Due to an error in the Microsoft.Web.Services3 library, it was possible for a specially crafted request on staging service to bypass the initial authentication and proceed to deserialize user-controlled input. The deserialization of the user-controlled input then led to remote code execution on the server where the Kentico instance was hosted.
Workaround for all Kentico versions
The workaround for this issue is the same for all projects, regardless of staging utilization - set the 'Staging service authentication' setting to 'X.509':
1. Navigate to 'Settings' -> 'Versioning & Synchronization' -> 'Staging'
2. Under the 'Staging service' section set 'Staging service authentication' to 'X.509'
3. 'Save' the changes
Applying hotfix 11.0.39 or newer introduced a change in the e-commerce API, which could cause undetected broken functionality for sites with a customized tax calculation process. After applying hotfix 11.0.47, such cases now clearly result in a runtime and compilation error. Any custom code that prepares 'TaxCalculationResult' objects can no longer use the setter of the 'TotalTax' property, and must instead set the new 'ItemsTax' and 'ShippingTax' properties.
When indexing page attachments, errors caused by malformed attachment content (e.g., invalid Unicode characters) displayed insufficient debugging information. After applying the hotfix, the error message now contains the ID and name of the attachment causing the exception.
If certain drop-down selector form controls (e.g. the 'Uni selector' in 'Single drop down list' selection mode) were placed into a form that was displayed in a dialog, such as the web part configuration dialog, and the field's settings also used an 'Enabled condition', clicking the '(more items...)' option in the list did not work correctly and the additional selection dialog was not opened.
When writing custom code that obtained a shopping cart object for an existing order using the 'ShoppingCartInfoProvider.GetShoppingCartInfoFromOrder' method, the cart's 'OrderDiscount' property was not set and always returned 0 (until the shopping cart was recalculated by calling its 'Evaluate()' method).
The 'Contact has agreed with consent' macro rule was not evaluated correctly in certain types of conditions (for example in marketing automation process triggers), and always returned a false value.
When importing a page type or custom table on an instance where the given object did not exist yet, role permissions configured for the page type or custom table were not imported.
Authentication failed when signing in to a website through the 'Facebook Connect logon' web part (a JavaScript error occurred due to changes in the Facebook SDK).
An error occurred on pages that displayed product details using an ASCX transformation containing the 'ShoppingCartItemSelector' control, if the control's 'UnavailableVariantInfoEnabled' property was enabled and the displayed product did not have any defined variants.
Local search indexes did not work when running Kentico as a scaled out Azure Web App with the 'CMSSharedFileSystem' web.config key enabled (this key was introduced in hotfix 11.0.23).
Attempting to publish a page under a workflow after restoring it from the recycle bin worked incorrectly. This happened only if the workflow was applied to an existing page after its creation.
If the 'Membership reminder', 'Report subscription sender' or 'Users delete non activated user' scheduled tasks were imported within a package from an older version, the given tasks could not be executed due to an incorrect assembly and class name.
When using the general export feature of listings in the administration interface (export to Excel, CSV or XML files), text data containing special characters, such as diacritics, could be malformed in the exported files.
Due to changes in the Facebook API and updated security requirements, the initial Facebook authentication and page publishing functionality in Kentico no longer works. To use the features, you need to apply the hotfix, and manually set 'Valid OAuth redirect URIs' for your Facebook app, and ensure that it has the required permissions via the Facebook App Review. See the hotfix instructions for details.
On sites using an Azure Search index, updating a page that had the 'Exclude from search' option enabled (on the 'Properties -> Navigation' tab of the Pages application) resulted in a failed indexing task, which blocked further processing of Azure Search tasks (until the failed task was manually deleted).
A warning message about not saved changes was displayed after editing and saving a page field using the 'Uni selector' form control (on the Form or Content tab of the Pages application). The warning message was displayed even when all changes were correctly saved.
If a multilingual page used an ad-hoc page template shared by all culture versions, deleting a culture version of the page also permanently deleted the page template (this caused the remaining culture versions to display blank content). After applying the hotfix, templates shared by multiple culture versions are deleted only after deleting the last culture version of a page.
Output caching did not work correctly on the pages of MVC sites for registered users due to unnecessary cookie operations performed by the system. The problem affected users whose 'Preferred user interface culture' was set to '(default)', for example newly registered users.
Certain macros related to email marketing did not take A/B testing variants of emails into account. For example, this could lead to incorrect evaluation of conditions that used the "Contact has opened marketing email" macro rules.
Smart search indexes of the 'Pages crawler' type used incorrect URLs for pages of content only page types, which prevented content from being indexed (for example on MVC sites).
When adding system page fields to a page type (fields with the 'Field type' set to 'Page field'), the 'Default value' was not applied in the resulting editing form for certain system fields, for example 'DocumentMenuItemHideInNavigation'.
If a contact was already subscribed to a newsletter with double opt-in enabled and attempted to subscribe again after the double opt-in interval had expired, the system did not inform them about the existing subscription. Similarly, calling the 'IsSubscribed' API method of the default 'ISubscriptionService' in custom code incorrectly returned a false value in these cases.
When a page field was edited on the Form tab of the Pages application with a value that did not meet the requirements of a validation rule, repeated submission of the data (e.g., moving to the next workflow step) incorrectly resulted in successful validation (while the original data was submitted).
If a scheduled task was configured to run only on specific days of the week, the 'Next run' time was calculated incorrectly under certain circumstances.
On sites running in a web farm environment, duplicate copies were sent out for a portion of newsletter or email campaign emails in certain cases.
Resizing of attachment images according to device profiles did not work correctly. Resizing was performed according to the device profile active when the image was requested for the first time. The result was cached and incorrectly served for all other profiles until the cache expired.
Payments using the default PayPal provider failed if the site was configured to include tax in prices. If you have customized the tax calculation process by creating your own 'ITaxCalculationService' implementation, you need to manually update your code after applying the hotfix. When preparing 'TaxCalculationResult' objects, set the new 'ItemsTax' and 'ShippingTax' properties instead of the original 'TotalTax'.
If the 'Documents' macro was used together with the 'Columns' macro method, the returned pages did not contain coupled data columns of specific page types (even when the 'WithAllData' property was added to the Documents collection, and the given columns were specified in the 'Columns' method).
The 'RelatedDocuments' property available for page objects in macros did not work correctly (the macro is used to retrieve a collection of all pages related to the given page through a relationship with the specified name).
An unhandled error occurred when creating fields with a data type for which no form control was available (e.g. 'Binary' type fields in custom module classes). After applying the hotfix, the error no longer occurs in these cases and the 'Form control' selector is disabled. However, it is still necessary to implement a custom form control if you wish to display fields of the given type in forms.
The primary or secondary contact assigned to an account was removed if the corresponding contact was merged with another contact (for example with a new anonymous contact).
When a field using the 'reCAPTCHA' form control was added to a form, the resulting HTML code was invalid (a <span> tag containing <div> elements). After applying the hotfix, the rendered <span> is replaced by a <div>.
The 'HTML5 input' form control only accepted integer (whole number) values when configuring the 'Max', 'Min' and 'Step' attributes. After applying the hotfix, other types of values, such as decimal numbers or dates, can be saved into the attributes.
In special cases, switching between sites in the header of the administration interface could cause an error (stack overflow exception) and a possible site crash. The problem occurred only on instances with customizations performing certain types of actions within handlers for the user update event.
Licenses were not loaded correctly when using cultures with certain calendar types (for example the Persian calendar).
Page aliases containing wildcards were processed in an incorrect order in certain cases. For example, if a page had two aliases with paths like '/page/{param}' and '/page/{p1}-{p2}', accessing the URL path '/page/value1-value2' resulted in the first alias being selected instead of the second (the value of the 'param' parameter was 'value1-value2', and the 'p1' and 'p2' parameters were not set). After applying the hotfix, you need to resave all page aliases where this problem occurs.
After enabling SCAYT (Spell Check As You Type) functionality in the editor, the options dialog (languages, dictionaries, etc.) did not work correctly.
On sites running in a web farm environment, content personalization conditions based on the visitor's persona were not evaluated correctly in some cases, which caused the incorrect personalized content to be displayed.
If the serialization of an object by the continuous integration solution failed because the resulting file's absolute path exceeded the maximum limit of 260 characters, the system logged a 'PathTooLongException' error into the event log without any additional debugging information. After applying the hotfix, the error message contains the absolute path of the file.
When storing smart search indexes on a shared file system (for example Azure Blob storage or on instances deployed to Azure Web Apps), the index files could become locked if an application restart occurred while building or updating an index. This blocked further index operations, such as index rebuilds. After applying the hotfix, the system is able to automatically resolve most scenarios related to locked index files. If your system already contains index lock files created before applying the hotfix, you need to manually delete them from the file system.
The 'Sales' and 'Number of orders' e-commerce reports displayed an incorrect "Total" value when filtering with a specific 'To' date was used.
On instances with a database hosted on SQL Server 2008 R2, an error occurred when selecting a query in the configuration dialog of a custom query web part (for example via the 'Query name' property of the 'Repeater with custom query' web part). The issue only occurred after applying hotfix 11.0.12 or newer.
When using the Kentico integration API for Salesforce in custom code, a FormatException error occurred after executing a SOQL query that returned an empty value for a field.
When using a selection dialog to select items with a specified filter (e.g., adding applications filtered by name to the default system dashboard for individual roles via the 'Roles' application), the filtering operation was also applied to the existing selection of items, often causing the loss of a portion of the selected items.
The menu providing data export options did not work in the 'Contact demographics' report accessible from the 'Email marketing' and 'Campaigns' applications.
In an environment with multiple sites running on a single domain, an error occurred when a user tried to create a new email feed in the 'Email marketing' application.
The 'Country selector' form control did not display the state selector element if used for a field in a form that was placed into an editable text area through the 'On-line form' inline widget.
When displaying coupon codes added to a customer's shopping cart, it was not possible to adjust the appearance of codes that are no longer valid (for example after the cart's total price falls under the value required by the coupon code's discount). After applying the hotfix, coupon code transformations provide an 'IsApplied' data property, which can be used to evaluate whether codes are still valid.
Users without the administrator privilege level encountered an 'Access denied' error when attempting to edit hierarchical transformations from the web part configuration dialog in the Pages application, even if they had sufficient permissions for the 'Design' and 'Content' modules.
After attempting to save a domain alias with a domain value that was already used by another alias (on the same site or another site), the edited site was stopped.
When using the Advanced export feature for page aliases in the 'View all aliases' dialog in the Pages application, an error occurred if the 'Current page only' option was disabled.
When an administrator performed actions while impersonating another user, entries created in the system's event log did not contain the administrator's original name (only the name of the impersonated user).
Sending of marketing emails failed if monitoring of bounced emails was enabled and the sender name configured for the email feed or specific email contained a comma character.
Users without the administrator privilege level were not allowed to manually create or edit coupon codes for gift cards, even with sufficient permissions for managing gift cards.
The API incorrectly allowed 'Union', 'UnionAll', 'Intersect' and 'Except' operations to be used with IMultiQuery objects (most commonly 'MultiDocumentQuery' objects returned when retrieving pages of multiple types). These operations are not supported for such objects and generated incorrect queries. After applying the hotfix, such operations result in a "not supported" exception.
By default, the geolocation feature uses MaxMind's GeoLite or GeoIP Legacy Databases, which will be discontinued in the future. The hotfix allows you to manually integrate the newer GeoIP2 Databases. To do this, you need to apply the hotfix, then install the 'Kentico.Geolocation.Update-v11' NuGet package, and add the required database files into your web project. See the hotfix instructions for details.
'Unsubscribe from newsletter' action of the 'Newsletter subscription' step in Marketing automation did not work correctly.
Payments using the default PayPal provider failed if the order had an applied Buy X Get Y discount or Product coupon discount with a certain type of additional condition (for example a discount available only for registered users).
When a URL path value of the 'Route' type was modified for a page, the changes were not synchronized correctly between web farm servers.
When using the Advanced export feature for the activity log in the 'Contact management' application, an error occurred if the 'Export raw database data' option was enabled and columns that are not included in the activity list by default were selected.
Default values of email widget properties were incorrectly applied whenever empty values were saved to the properties.
When performing page content tree actions in a modal dialog (for example moving a page), the interface could be confusing after switching to the listing mode that shows all sub-pages of a specific page. After applying the hotfix, a notification message is displayed to inform the user about the change of listing mode and to enable them to get back to the original listing.
If content personalization was enabled, adding a widget with the 'Skip initial configuration' option enabled on a page under workflow could cause other widgets in the given zone to disappear after the page was saved.
Email widgets did not reflect the selected UI culture when storing their properties which could lead to an error when editing the widget. After applying the hotfix, you need to manually re-save the configuration of affected widgets in your emails.
The system incorrectly resolved URLs of videos inserted via the 'YouTube video' web part or widget. The issue only occurred after applying hotfix 11.0.22 or newer.
When clicking the 'Clear' button next to the 'In roles' and 'Not in roles' fields in the advanced user search on the 'Users' tab of the 'Users' application, the textbox field was not cleared.
Translations.com submission could fail when a page was submitted for translation into multiple cultures, but was already translated into some of the target cultures (even though the 'Skip already translated pages' checkbox was selected).
The system did not accept values of the floating-point number (double) type that contained a digit group separator (thousands separator), but did not have a decimal part. The problem occurred only after applying hotfix 11.0.22 or newer.
Errors were generated in the system's event log when web bots, such as search engine crawlers, processed pages containing certain types of selector components (for example a country selector).
Searching for personal data in the 'Data protection' application was inefficient. Applying the hotfix improves the performance of the personal data search.
Code generated for page types, custom tables, forms or module classes was invalid if the given object had a field with a description containing newline characters. Saving the resulting code could lead to an error on the site (the code could not be compiled).
Redundant web farm synchronization tasks were being created and processed in environments where multiple web farm instances shared a single file system (e.g., when running Kentico in Azure Web Apps). This could lead to unwanted side effects, e.g., when synchronizing smart search indexes. The hotfix introduces a new 'CMSSharedFileSystem' web.config key that notifies web farm instances they are operating over a shared file system and configures them accordingly. See the hotfix instructions documentation page for more details.
If the '~\CMSPages\PortalTemplate.aspx' page was customized by adding the Async="true" attribute to the 'Page' directive, resolving of resource strings in page content did not work correctly.
If the Target framework of a Kentico web project was set to .NET Framework 4.7.2, a compilation error occurred due to an ambiguous reference (ToHashSet method). The error could also occur after installing .NET Framework 4.7.2 if live site debugging was enabled on the live site.
The access denied page of the administration interface did not work correctly when running on certain types of domains, and a generic 403 error was displayed instead when a user attempted to access an administration page without the required permissions.
Links to pages inserted into consent text in the 'Data protection' application did not always reflect the culture selected in the 'Language of consent' selector.
When the 'Cookie law and tracking consent' web part was placed on a page and the 'Default cookie level' setting was set to 'System', users were repetitively signed out from the administration interface after attempting to view this page in the Pages application.
When running continuous integration configured to not exclude any object types, or using the object blacklist (i.e. having objects specified under the '<ExcludedObjectTypes>' element in the 'repository.config' file) and restoring a new custom table type already containing some data, the corresponding data was restored only after the second time continuous integration was run. After applying the hotfix, new custom table definitions together with their corresponding data are restored in a single run of the continuous integration application.
Adding a site-specific role to a user on the Roles tab in the Users application caused all global roles to be removed for the given user.
The processing of on-line marketing activities could malfunction and log errors in the event log under certain circumstances.
Evaluation of form field validation rules could fail or lead to errors in special cases.
If a field using a text area form control (e.g. 'Text area' or 'Rich text editor') had a specified maximum text length, the system logged macro security warnings in the event log when the text length was exceeded in the resulting form, even if the text did not contain any macros.
Merging of contacts who subscribed to newsletters from different sites caused data inconsistencies, which could lead to errors when sending the newsletters.
If object locking of page templates was enabled, templates containing the 'Customer address' or 'Customer detail' web part could not be checked in while displaying web part content (on the Design tab of the Pages application).
Macros placed in the code of transformations could become invalid in special cases after re-signing macros in System -> Macros -> Signatures. The problem occurred if the macro expression contained certain characters (e.g. '<' or '>'), and object versioning was used for transformations, for example when undoing check-out or restoring an older version of a transformation.
An error occurred when restoring continuous integration files if the data contained an object with a field representing an optional reference to another object, and the referenced object was deleted (for example, the 'ItemCreatedBy' field of custom table data items referencing a user object). After applying the hotfix, such objects are restored successfully with a null value in the given reference field.
Code generated for page types, custom tables, forms or module classes was invalid if the given object had a field of the Text or Long text type with a default value containing certain characters (for example newlines or quotation marks). Saving the resulting code could lead to an error on the site (the code could not be compiled).
When selecting a layout of a widget in the 'Widgets' application, an error occurred if there was a large number of available layouts and the '(more items...)' option was selected.
Discount coupon codes could be applied more than once during checkout in certain cases.
Effective May 18th, 2018, the LinkedIn API will no longer work with the original OAuth 1.0 implementation in Kentico. The hotfix updates the system to use OAuth 2.0 authentication for LinkedIn company management and authentication functionality. After applying the hotfix, you need to add appropriate 'Authorized Redirect URLs' for your application in the LinkedIn developer portal, and also 'Reauthorize' all LinkedIn company profiles in your 'LinkedIn' application in Kentico. See the hotfix instructions for details.
The 'Log on-line marketing activity' property was reset for pages under workflow every time a new version of the page was created.
Sending of emails failed and an error was logged in certain cases if one of the email address field values (e.g., email recipients) ended with a semicolon character (typically used as a separator between addresses). For example, the problem occurred when sending form email notifications.
When a user without the Global administrator privilege level searched for users in the 'Users' application, the results incorrectly included users who were not assigned to the current site.
If the names of products, option categories, product options, or product variants were localized using resource strings, the names were not resolved in the order data sent to PayPal when making payments through the default PayPal gateway. After applying the hotfix, localized product names are resolved into the content culture that was active for the customer during checkout.
Due to an incorrect permission check, users with the 'Start process' permission for the 'On-line marketing' module could not start automation processes.
When continuous integration was enabled, certain operations that affected a large number of related objects caused the system to generate extremely complex SQL queries, which could lead to incorrect behavior or logged SQL server errors. For example, the problem could occur after changing the 'Page alias' of a page.
When working with an instance of 'CKEditorControl' in custom code, setting the control's 'EnterMode' or 'ShiftEnterMode' properties did not correctly adjust the HTML output generated by the resulting editor.
Due to a misconfiguration of the system, POST requests missing a CSRF token leading to nonexistent pages on sites with a configured 'Page not found URL' setting resulted in a 'CSRF' exception being logged in the 'Event log' application. After applying the hotfix, the system logs a standard 'PAGENOTFOUND' exception and returns the server's default 404 response.
Opened emails and links clicked within those emails were not being tracked when the recipient's email address contained a plus ('+') sign.
Applying the hotfix changes the license limitations on the maximum number of enabled products. The limit is removed completely for 'Kentico CMS Ultimate' license editions and increased to 500 products for 'Kentico CMS Base' editions.
The expiration time of output cache items was incorrect for pages under workflow that had a 'Publish to' value specified (in cases where the 'Publish to' time occurred later than the expiration time determined by the 'Cache minutes' set for the output cache).
Staging tasks were not logged for changes made in the 'Settings' application.
If a page with output caching enabled was accessed with UTM parameters in the URL query string, a license limitation error occurred if the site had a lower license edition than EMS.
An error occurred when retrieving SharePoint list data that did not contain the 'Title' field.
Processing of search indexing tasks for custom table data failed if the primary key column of the related custom table had a different name than 'ItemID'. Such custom tables can be created via the 'Use an existing database table' option in the custom table creation wizard.
On instances using Windows authentication, an error was logged into the system's event log if a visitor's first request targeted a resource handler that used a read-only session (for example an attachment or media file URL).
Form fields without a specified 'Default value' that had the 'Required' property enabled incorrectly contained a default value when using certain form controls. For example, if a required field of the 'Decimal number' data type used the 'Text box' form control and had no 'Default value' specified, the field's value was '0' in the resulting form.
The selection dialogs for queries and transformations incorrectly displayed the page type or custom table prefix twice in the names of the listed transformations or queries (for example when selecting a transformation in the configuration of a listing web part).
If a macro that accessed the current shopping cart was added to a condition which the system resolved during the shopping cart calculation process, an infinite loop could occur, leading to a stack overflow error and possible server crash. For example, the problem could be triggered by using the 'ECommerceContext.CurrentShoppingCart' macro within the condition of a discount. Applying the hotfix prevents such errors. However the 'ECommerceContext.Current*' macros are not intended for use in shopping cart calculation conditions (e.g. discounts) and may still work incorrectly - use the Context specific macro objects instead.
If a customer selected an existing address during checkout (via the 'Customer address' web part), another copy of the given address was created for the customer.
When importing a package from an older version that contained form, custom table, page type or module class objects, the field definitions and database structure of the given objects could be overwritten even if the objects were not selected during the import. Additionally, an error occurred in some cases when importing packages with this type of data.
The original implementation of the 'reCAPTCHA' form control will stop working after March 31, 2018. The hotfix updates the control to use reCAPTCHA v2 (allows users to prove they are human simply by clicking a checkbox). Please register your site again at, select the 'reCAPTCHA v2' type, and copy the new API keys into the corresponding Kentico settings.
When viewing stand-alone SKUs in the 'Products' application, an error was displayed above the list and the product data was not loaded correctly. The problem occurred only after applying hotfix 11.0.3 or newer.
The 'SharePoint data source' web part did not correctly handle multiselect fields with complex types. The hotfix ensures that the data source processes such fields into strings consisting of the type's properties separated by semicolon (;) characters, and individual entries separated by newline characters (environment specific). You need to manually parse the string within the transformations applied to the data source's output, for example: 'String.Split(Eval<string>("MultiselectField"), Environment.NewLine).Select(l => String.Format("{1} ({0})", String.Split(l, ";")))'
If the 'Allow preview mode on the live site' setting was disabled, certain widget dashboards in the administration interface reloaded constantly (for example the 'Dashboard' page in the 'Web analytics' application).
The hotfix adds support for app secret proof parameters when communicating with integrated Facebook apps. If your website uses the Facebook page integration features or Facebook authentication, we strongly recommend applying the hotfix and then securing your Facebook app. Configure your app at and enable the 'Require App Secret' option in the 'Settings -> Advanced' section.
The synonym search functionality of locally stored search indexes did not work (when using the 'Any words or synonyms' search mode).
Marketing emails were incorrectly sent to contacts labeled as 'Undeliverable', if this status was reached after a marketing automation process set the number of bounced emails for the contact.
Logging of web analytics did not work correctly for cultures using a year numbering system different from the Gregorian calendar.
If a form contained a field named 'DisplayName' and another field using the 'User selector', 'Multiple user selector', 'User name selector' or 'Community group members selector' form control, the value of the DisplayName field was always overwritten to '##USERDISPLAYFORMAT##' when the resulting form was saved. The issue occurred for all types of forms (fields of page types, custom tables, online forms, etc.).
A licensing error occurred in certain cases when using the API to get the current contact on a site with a lower license edition than EMS.
If multiple pages had URLs with wildcards and the URLs had at least one wildcard in common (for example '/Articles/{topic}' and '/Articles/{topic}/{date}'), the system returned a 404 Page Not Found error when the page with the more specific URL was requested.
When using the Advanced export feature for recipients in the Email marketing application, the values of certain columns (e.g. 'Newsletter subscription', 'Receiving marketing email') were not exported correctly if the number of records was very large (more than 1000).
A runtime error occurred after installing the 'Kentico.Libraries.Web.UI' NuGet package into an external project. To resolve the issue, apply the hotfix to the related Kentico project and database, and then update the package in your external project to the corresponding version (11.0.6 or newer).
If the 'Field description' text of a page type field contained a macro that accessed the property or value of a different field, the description was not displayed correctly (in the field's tooltip) after a page was saved on the Form tab in the Pages application.
An error could occur on pages containing the 'Event calendar' web part when running under heavy load.
On upgraded Kentico instances, an error occurred when editing an email feed with A/B tested emails that were created before the upgrade, i.e., on previous Kentico versions.
When importing a package containing marketing emails with A/B testing applied, the variants of the email were duplicated.
An error occurred when viewing the 'Activities' tab of a specific contact in the Contact management application. The problem occurred only after applying hotfix 11.0.3 or newer.
Code files generated for all page types on a site were created in a different location than the one specified. This could prevent the user from finding and working with the given code files.
Moving a set of over 500 pages with the 'Remember original URLs when moving pages' setting enabled did not create the appropriate page aliases for some of the pages.
Paging of contacts did not work in the 'Contact demographics' report accessible from the 'Campaigns' and 'Email marketing' applications.
When checking for available hotfixes and upgrades in the Kentico Installation Manager utility using the 'Check' function, the tree containing available updates displayed major versions in an incorrect order. To fix the issue, the hotfix must be applied to the Setup files (switch to advanced mode in the hotfix utility).
When attempting to import certain object types (for example pages) using the Kentico Import Toolkit utility, an error occurred in the source data preview step ("Cannot load default column mappings. Class not found.").
The 'reCAPTCHA' form control did not work in environments using an HTTPS proxy (SSL offloading). The control incorrectly generated HTTP URLs instead of using HTTPS.
Tracking of clicked links did not work in page-based newsletters due to incorrect generating of tracking URLs. This issue only occurs on instances upgraded from Kentico 10 with hotfix 11.0.3 or later applied.
On sites with multiple content cultures, the 'Related pages' web part displayed the default culture version for untranslated related pages, even if the 'Combine with default culture' setting was disabled.
On sites that had the 'Combine with default culture' setting disabled, page aliases with the Culture set to '(all)' did not work for pages that were not translated in the site's default culture.
When using Amazon S3 for file storage, file access errors could occur on the live site in rare cases. The Amazon S3 storage provider incorrectly opened files in a way that prevented concurrent access.
When checking for available hotfixes and upgrades in the Kentico Installation Manager utility using the 'Check' function, an error occurred if the registered instances included at least two different major versions. To fix the issue, the hotfix must be applied to the Setup files (switch to advanced mode in the hotfix utility).
The 'HTML5 input' form control generated misspelled 'minlength' and 'maxlength' attributes, which resulted in invalid output. The misspellings were also present in the form control's configuration interface.
Editing of properties did not work for inline widgets placed into a rich text field of a content only page (the opened configuration dialog was blank).
The REST service limits the length of request data in JSON format to 2097152 characters (4 MB of Unicode string data) by default, which is not sufficient in certain cases (for example when creating objects with large attachments). After applying the hotfix, the character limit can be adjusted using the 'CMSRestMaxJsonLength' web.config key.
Viewing a preview link in a private window or a browser where the preferred culture (cookie) was not set resulted in the access denied page being displayed, even if the page was publicly accessible. The problem only occurred when opening the preview link for the first time.
When using the Advanced export feature for the activity log in the Contact management application, an error occurred if the 'Export raw database data' option was enabled and columns that are not included in the activity list by default were selected.
It was not possible to configure and send existing page-based (dynamic) newsletters on upgraded instances due to incorrect backward compatibility.
When editing product pages under workflow, the changes were displayed on the live site before they were published in certain cases (for example product names or prices displayed in shopping cart content). Applying the hotfix also disables inline editing of prices in product lists for products under workflow (inline editing is not compatible with product workflow).
Payments using the default PayPal provider failed if the purchased items contained a free product obtained by fulfilling the conditions of a Buy X Get Y discount.
The E-commerce API contained the 'ShoppingCartItemParameters.Price' property, which was no longer necessary and incorrectly used the 'double' type. After applying the hotifx, the property is marked as obsolete.
When creating or editing fields of custom module classes, the system incorrectly offered the 'File' data type. This data type is not intended for use with module class fields.
When attempting to hotfix a project that was set to use .NET 4.7.1, an error occurred in the hotfix utility, stating that the .NET Framework version could not be determined. Hotfixes 11.0.2 and newer support all build versions of .NET 4.6 and 4.7.
Payments using the default PayPal provider failed for customers who specified their phone number during checkout.
If a selection dialog contained a filter with another object selector that offered a sufficiently large number of items, the '(more items..)' option did not work.
When a user submitted an on-line form which contained a field with the 'Consent agreement' form control, the system displayed recorded data for the form in the 'Forms' application as if the user gave an agreement with the specified consent, even when the user revoked the consent agreement.
The content of a visitor's shopping cart was lost if the 404 error page was loaded or a resource on the current page returned a 404 not found error, and the error page itself or its master page accessed the 'EcommerceContext.CurrentShoppingCart' object.
An error occurred when restoring continuous integration files if the data of a custom table was added to the '<IncludedObjectTypes>' whitelist in the repository.config file (using an object type with the 'customtableitem.' prefix) and the corresponding custom table definition (cms.customtable object) was not yet created in the target database. The problem typically occurred when attempting to restore a new custom table definition together with its data.
<p>Added security improvements to the application.</p>
When saving a new culture version of a product page that was created based on the content of another culture version, with the 'Save the new page before editing' option disabled, the values of any modified product fields were overwritten by the original values of the source culture version.
The output filter for resolving relative URLs did not work in special cases for websites running under heavy load.
Custom fields added to the Contact class under a custom category were not displayed correctly in the contact editing form. In general, the problem was caused by incorrect transferring of fields to alternative forms in cases where the original form and the alternative form contained a different category structure.
On instances that were upgraded from Kentico 9, the User class had its 'Overwrite existing contact information' option disabled by default. As a result, updates of user values were not transferred correctly to the corresponding contact. Applying the hotfix enables the option (if the On-line marketing settings are not customized for the User class in the Modules application).
Only the first property value configured for inline widgets was saved correctly for properties using certain form controls that store values into multiple fields (for example the 'Report graph selector' form control).
An error occurred on pages containing the 'On-line users' web part in special cases when the 'Store on-line users in database' setting was disabled.
If the image field configured in the search settings of a page type or object stored absolute image URLs (for example images served from a CDN), the images were not displayed in the search results.
An error occurred when a user with the Editor privilege level clicked the 'View all aliases' button while creating or editing a page alias (on the URLs tab of the Pages application).
If the 'Country selector' form control was configured to return the ID of the selected state for an integer type field, the state selector in the resulting form did not filter the displayed states based on the selected country in certain cases (states from all countries were loaded).
An error occurred when recipients of a particular newsletter email feed were exported using the 'Advanced export' action with the 'Export raw database data' option and 'Email' column selected.
When cloning a page template, the template's scopes were cloned even if the 'Clone page template scopes' option was disabled in the cloning dialog.
An error occurred when cloning a form that contained a field with a reserved SQL keyword in its name (for example "function").
For product pages using workflow or versioning on a site with multiple content cultures, the values of the 'Product name', 'Short description' and 'Description' fields were not saved correctly when editing a product page in a non-default culture.
The serialized data created by the continuous integration solution for certain types of bindings (M:N relationships between objects) had an inconsistent order. The different data order caused unnecessary changes in the CI repository files (even when the binding data remained the same). To fix the problem, you need to apply the hotfix and then serialize all objects again in the Continuous integration application. The update may cause a large number of changes in existing CI data for bindings.
When importing contacts from a CSV file, the comparison of email address values was case-sensitive. This could cause the import to incorrectly create duplicate contacts instead of updating existing contacts with a matching email address.
If a folder in a media library had a name that matched the name prefix of other folders in the same location, renaming the folder corrupted the file paths of media files contained in the other folders.
The values of properties configured for inline widgets were not saved correctly for properties using a form control that stores values into multiple fields (for example the 'Attachment field selector' form control).
When using workflow or versioning for pages, macro expressions that were stored within the data of a page version and contained certain characters (for example '&' ampersands) became invalid after re-signing macros in System -> Macros -> Signatures. As a result, such macros could not be resolved when displaying content from a page version (for example in preview mode) or after rolling back to an older page version.
When a widget with a property using the 'Media selection' form control was inserted into a page field on the Form tab in the Pages application, the media selection dialog opened when setting the property's value did not display the Attachments tab.
If an empty value was saved into a widget property with a text data type, the property's default value was always loaded instead when the widget's configuration dialog was re-opened.
When using multiple Kentico applications with a shared file system and enabled web analytics, file lock errors could occur when processing the analytics log files. If you have such an environment, you also need to manually ensure that the file system is configured as shared after applying the hotfix (see the Hotfix instructions for more information).
If a site was configured to always use URLs without a trailing slash together with certain other combinations of settings from the 'URLs and SEO' category, the Google sitemap generated by the system still contained URLs with a trailing slash. This caused unnecessary URL redirections.
The system did not trigger any global events when creating objects by directly inserting data into a database table (for example when a user imported contacts from a CSV file). After applying the hotfix, developers can perform custom actions before or after such actions by assigning a handler to the new 'SqlEvents.BulkInsert' event.
The 'Data.
When viewing the details of a page version on the 'Versions' tab in the Pages application, information about editor widgets placed on the given page was not displayed (in the 'DocumentWebParts' field). The problem occurred only after applying hotfix 10.0.41.
Contacts were created or updated when submitting on-line forms, even though the 'Enable on-line marketing' setting was disabled.="" />
Manual synchronizing of scheduled task objects using the 'Synchronize current subtree' action did not work (on the Objects tab of the Staging application).
The 'SearchResultUrl' and 'GetSearchImageUrl' transformation methods returned empty values when used to display results for smart search indexes of the 'Custom index' type. The problem occurred only after applying hotfix 10.0.37 or newer.
When comparing two versions of a page, the information about editor widgets placed on the page (the 'DocumentWebParts' field) was switched between the two versions, i.e., the newer version listed the widget content of the former version and vice versa.
If the application started under heavy load and the 'runAllManagedModulesForAllRequests' modules attribute was enabled in the web.config, initialization was not performed correctly in special cases, which could cause errors with the following message: "You cannot change the default type of the generated objects after some objects were created by the generator."
If a customer's shopping cart contained a product that was added automatically as a result of a Buy X Get Y discount, and an application restart occurred on the server, the customer then received an error (stack overflow exception) when viewing any shopping cart-related pages. The problem occurred only after applying hotfix 10.0.24 or newer.
If the 'Country selector' form control was configured to return only the ID of the selected state for an integer type field, an error occurred in the resulting form when attempting to load and display the selected values for the field.
Selection dialogs for objects that can be either site-specific or global did not set their values when global objects were selected (for example the 'In roles' and 'Not in roles' selectors in the advanced search filter of the Users application). The problem occurred only after applying hotfix 10.0.35.
When using Text / XML transformations, it was not possible to access the values of data fields whose name contained characters conflicting with the Kentico macro syntax (for example hyphens in element names when using an XML data source). After applying the hotfix, such fields can be accessed in transformations using the 'DataItem' macro object and indexing, for example: {% DataItem["field-name"] %}
When an object was restored from the recycle bin, or modified by rolling back to an older version or undoing checkout, the resulting staging task did not contain information about the user who performed the change and was not categorized under the staging task group that was active when the change occurred.
When using an external file storage provider for media libraries (for example Azure Storage), an error occurred while displaying media library content in the 'Insert image or media' dialog if the 'Content -> Media -> Use permanent URLs' setting was disabled and the selected library contained more folders than allowed by the 'Max subfolders' setting.
When using the 'Kentico.Libraries.Web.UI' NuGet package in an external project, required third-party libraries were not copied to the correct output folder when building the project.
An error occurred when viewing the 'Page template selector' form control on the 'Preview' tab in the Form controls application.
When staging pages (with enabled versioning without workflow) in an environment where the page IDs were not identical on the source and target server, non-required page fields that had an empty value were not synchronized correctly.
Searching using smart search indexes of the 'Custom index' type did not work. The problem occurred only after applying hotfix 10.0.37.
When resolving the ##WHERE## expression within the code of queries, the system removed adjacent newline characters in special cases, which could result in an incorrect query (for example if the SQL code also contained a single-line comment).
The values of the '_index' and '_type' system fields within smart search indexes were not converted to lower case, which could lead to incorrect search behavior when using these fields in search queries and filters. To ensure that searches filtered according to these fields work correctly, you need to rebuild your search indexes after applying the hotfix.
When creating new pages in the Pages application, an error occurred when uploading a non-image file (pdf, docx, etc.) into a field of the 'File' data type, if the system was configured to store files in the file system and had the integration bus enabled.
When monitoring of on-line users was enabled, the system did not correctly clear expired sessions for anonymous users (guests). As a result, the number of displayed on-line users included guests who were no longer present on the website.
When a visitor reduced their allowed cookie level by clicking the 'Deny all' or 'Allow specific' button provided by the 'Cookie law consent' or 'Simple cookie law consent' web part, already existing cookies that no longer belonged to an allowed level were not invalidated in the visitor's browser.
Selection of multiple items in UniSelector dialogs was processed inefficiently, which could cause the selection to fail in certain cases.
Icons in the administration interface displayed incorrectly in the Google Chrome and Opera web browsers when using built-in developer tools of the browsers.
After application start, Azure Blob storage containers were not initialized correctly under certain circumstances, leading to an error on initial requests.
Mass email feature in the 'Users' application did not work properly on sites with Kentico CMS Base or Free license.
When editing memberships in the 'Membership' application, the 'Products' tab did not display product options (of the 'Products' type) that were associated with the given membership.
Content personalization on a page containing the 'Page placeholder' web part did not work on child pages of the given page. Default variants of the personalized content were displayed instead of the personalized variants.
When creating installation packages for custom modules, the system generated NuGet package files with a name containing an underscore. Such packages could not be installed using Visual Studio 2017. After applying the hotfix, module package files are generated with names in format '<modulename>.<version>.nupkg'.
When viewing the macro report in System -> Macros -> Report, each page in the list of macro expressions displayed one more item than allowed by the selected page size number.
When using Microsoft Azure Storage for the project file system or individual folders, the system logged insufficiently detailed error messages in cases where the storage account values were misconfigured.
New user accounts created via the 'CMSAdminEmergencyReset' web.config key did not have the Global administrator privilege level (when using the key to recover administrator access).
When setting up pages in the Facebook application, the 'Authorize' action always failed (due to changes in the Facebook API used for access token retrieval).
When using a database hosted on Microsoft Azure SQL, an error occurred when separating or rejoining the on-line marketing database (instead of displaying information about the additional manual steps required for Azure databases).
If localization expressions were added into the page title of a page (on the Properties -> Metadata tab in the Pages application), the system always resolved the value into the site's default culture.
When using the import feature to update an existing form, certain parts of the form's configuration were not overwritten correctly (for example the form field definition and alternative forms).
If a custom setting category under a custom module contained at least 11 setting groups and 11 or more setting keys in the first group, an error occurred when viewing the category in the Settings application.
Error messages related to the installation of NuGet module packages were misleading in certain cases. The error message was improved for cases where the module metadata contained multiple versions of the same module (this could occur for projects created using append-only deployment).
When viewing pages in on-site editing mode, localized text displayed by widgets was always in the site's default culture.
An error occurred when running the Hotfix utility on a system where only version 4.7 of the .NET Framework was installed (without any older versions).
When modifying or removing the security settings of a page (ACLs) for a user or role with content staging enabled, the system logged the staging task for the change incorrectly (or not at all), and an error was displayed in the event log.
When importing classes with query objects, either as part of a custom module or a customizable class (system table), the query import did not work correctly and an import error occurred if one of the class queries already existed on the given instance.: 'Eval<string>("MultiselectField").Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries)'
When adding a new page alias containing a wildcard, the uniqueness validation incorrectly failed if the alias's URL path was a less specific version of an already existing URL path with wildcards..").
When using the 'Payment method selection' web part on checkout pages, changing the payment option did not correctly trigger a refresh of other related content on the page. For example, order discounts applied based on a payment method condition were not immediately displayed by 'Shopping cart totals' web parts when the payment method selection changed.
When editing an object with a very large amount of dependent objects, the continuous integration serialization process could take a long time. Applying the hotfix optimizes the evaluation of object dependencies, which results in improved performance.
Due to an update of the Bing Maps API, the original Bing Maps web parts no longer work after June 30, 2017. Applying the hotfix updates the web parts to use the new Bing Maps V8 API. You may also need to update the transformation used to display location marker (pushpin) infoboxes, and perform additional manual steps if using custom pushpin icons.
If the 'Use server processing' property was enabled for Bing Maps web parts, addresses specified through location fields were not resolved into coordinates and corresponding location markers were not displayed on the map.
The validation of folder names in media libraries was not sufficient for certain types of names containing reserved file system keywords ('aux', 'com1', etc.). Such folders did not work correctly and could lead to errors.
An error occurred (MissingMethodException) when resolving macro methods that modify collections, such as 'OrderBy' or 'Where', if the collection contained Order or Page attachment objects (OrdersCollection, DocumentAttachmentCollection). For example, the error occurred for macro expressions such as: {% ECommerceContext.CurrentCustomer.Orders.OrderBy("OrderDate DESC") %}
When importing page attachments using the Import toolkit, the attachments were always created with a new GUID value, even if a source with existing GUIDs was used for the AttachmentGUID column.
When importing data for certain types of objects (e.g. attachments) in the Import toolkit, the final steps showing the import log and number of imported objects contained misleading data in some cases.
When the value of the 'Bounced email limit' setting was set to 0 (which means there was no limit for bounced emails), an error occurred on the 'Recipients' tab of newsletters in the Email marketing application.
When using continuous integration, the system did not correctly evaluate whether an update of a field definition required re-serialization of the dependent objects (for classes, page types, etc.). This resulted in unnecessary serialization that could take a very long time if the instance contained large numbers of objects of the given type. Additionally, the hotfix improves the performance of the file system repository optimization that occurs after each serialization or restore process.
When generating classes for page types, custom tables or forms on the 'Code' tab of the appropriate editing interface, the application timed out in special cases. The problem occurred if the given object contained a field whose name started with the object's code name, followed by a number (for example, a page type with the code name 'Custom.Article' and a field named 'Article3D').
The 'Display to roles' and 'Show for page types' system properties of widgets had a smaller 'Size' (number of allowed characters) than the corresponding properties of web parts. The size was increased to 1000.
Domain alias redirection did not work if the target URL had the same domain as the alias, only with an additional WWW prefix.
Management of forum and newsletter subscriptions in the 'My profile' and 'Users' applications did not work correctly.
When using a separated on-line marketing database, errors related to missing User-Defined Table Types could occur in the administration interface of on-line marketing applications (for example when viewing the 'Recipients' tab of a newsletter) or when calling the on-line marketing API in custom code.
When re-signing macros in System -> Macros -> Signatures with an 'Old salt' value specified, macro expressions containing open conditions or loops did not preserve the user name in the macro signature. The original name was incorrectly replaced by the name of the user performing the re-signing process.
If a project was upgraded from Kentico version 9 or older without installing the 'Microsoft.CodeDom.Providers.DotNetCompilerPlatform' NuGet package, a compilation error occurred after applying hotfix 10.0.23 or 10.0.24.
A 'BadImageFormatException' error occurred during the application start if an assembly containing native code was present in the CMS web project's 'bin' folder (for example after installing certain types of NuGet packages). 'Precision' value.
The web part configuration dialog did not work correctly when creating new web part layouts on the Layout tab. Additionally, the web part layout editing interface behaved incorrectly after making changes in the Preview mode (both in the web part configuration dialog and the Web parts application).
Validation did not work correctly for form fields that used the 'Rich text editor' form control and were set as 'Required'. Values could not be saved more than once if the content only consisted of void HTML tags (for example an iframe or image tag without any other content).
Performing certain types of shopping cart content changes could cause the cart to lose data that was not yet saved to the database (for example a discount applied via a coupon code was removed after updating the unit count of a product in special cases).
In certain cases, 'ShippingAddress' macro expressions returned an empty value for shopping carts or orders that used the billing address for shipping. After applying the hotfix, the macro always returns the billing address for order and shopping carts that do not have a different shipping address specified.
Attempting to perform HTML validation of HTML5 pages resulted in an error.
When configuring the XML definition of UniGrid components, the 'hideifnotauthorized' attribute of 'action' elements did not work.
In certain cases, staging of published pages containing attachments resulted in an error.
Automatic processing of search indexing tasks was not triggered correctly in certain cases (for example when indexing changes of object code names). This could cause search tasks to remain unprocessed in the queue until another task was logged.
The Conversion detail report did not filter data correctly according to the specified 'From' and 'To' dates.
When using macro expressions within marketing automation action steps placed after a 'Wait' step in the process, the 'CurrentSite' value is not available in the macro context. The hotfix adds a new 'ActivitySiteID' macro property, which you can use for automation processes with a 'Contact performed an activity' trigger (for example Abandoned shopping cart processes). The macro resolves into the identifier of the site where the trigger activity occurred.
When setting the 'CssClass' attribute for 'FormControl' tags within custom form layouts of the 'ASCX' type, the specified CSS class was not rendered in the form's output.
When working in 'Select' dialogs (for example after clicking 'Add roles' while editing a user on the Roles tab in the Users application), selecting all objects listed on a page via the header checkbox incorrectly cleared the selection of objects on other pages.
If a validation error occurred for a field in a form using a custom layout of the 'ASCX' type, the system did not add the 'Error' CSS class to the <div> tag containing the invalid field's input element.
When running multiple sites with the 'Use site prefixes for user names' setting enabled and the 'Require unique user emails' setting disabled, the forgotten password functionality generated password reset links for the wrong user in certain cases.
The 'Limit of related pages' setting of fields using the 'Pages' form control incorrectly limited the total number of related pages across all pages of the given page type, instead of only limiting the related pages for individual pages.
In certain cases, the system did not track conversions from links in marketing emails sent via the 'Send marketing email' Marketing automation step.
When restoring a custom binding class with a compound primary key from the continuous integration repository, the class's database table was not created correctly and had missing columns.
Errors that occurred as a result of the system's CSRF protection feature were difficult to identify in certain cases. After applying the hotfix, the related error messages provide more accurate information.
An SQL error occurred when adding a field to a page type if continuous integration was enabled and the system contained a very large number of pages of the given type (over 2000).
When adding resource strings into localizable fields (for fields using the 'Localizable text box' or 'Localizable text area' form control), the Save & Close action in the Localize string dialog did not work if the selected string contained multiple lines of text in the current culture.
If a recipient did not confirm a 'Newsletter' subscription with double opt-in within the set 'Double opt-in interval' ('Setting -> On-line marketing -> Email marketing'), they became unable to subscribe to that newsletter in the future.
If an E-product (a product with the 'Representing' property set to 'E-product') was created as a copy of another product, deleting of attachment files from the 'Files' field did not work correctly and the files remained in the database (COM_SKUFile table).
When uploading files via the 'MultiFileUploader' control (for example into File fields using the 'Direct uploader' form control), the file upload was denied in some cases and the system displayed an "unauthorized user" warning. The problem could occur on instances using Windows authentication while also having anonymous authentication enabled.
The 'Receiving marketing emails' column on the 'Recipients' tab of a 'Newsletter' incorrectly displayed the 'Opted out' status for users unsubscribed only from that single newsletter. After applying the hotfix, the 'Recipients' tab of a 'Newsletter' only displays the 'Opted out' status for users unsubscribed from all 'Newsletters'.
Macros in format 'Subscriber.Contact.<contact_property>' were not resolved correctly within the content of marketing emails for recipients who were added to a 'Newsletter' or 'Email campaign' as part of a contact group.
When a combination of the Pages data source (configured to retrieve multiple page types), Basic repeater, and Universal pager web parts was used with the 'Select top N pages' property, the resulting SQL query was generated incorrectly and caused an error.
In staging environments utilizing a chain of source and target servers with a circular topology, processed staging tasks persisted in the database of the last server in the server chain.
If a user submitted a form while having a non-default content culture selected, localization macros in the form's autoresponder emails were resolved into the site's default culture instead of the user's selected culture.
If a custom macro rule contained a boolean type parameter that was 'Required' and had a default value, an error occurred for users who left the default value while configuring the parameter in the 'Rule designer' dialog.
When using the REST service on instances with Windows authentication, domain user credentials provided in the basic authentication header of REST requests were not recognized correctly, which resulted in 401 Unauthorized responses.
Users granted permission to modify a page via page-level permissions ('ACLs'), but lacking the 'Modify' permission for the 'Content' module, were unable to modify the order of pages added to a page via the 'Pages' form control.
Heavy load on sites hosted on Microsoft Azure and utilizing the 51Degrees Premium integration caused site downtime in certain cases. After applying the hotfix, the 'Devices' selector on the 'General' tab of device profiles in the 'Device profiles' application no longer lists all devices from the 51Degrees Premium library.
On instances containing both an 'EMS' license and another license of the 'Ultimate' edition, certain types of actions could cause license limitation errors on the site using the domain with the EMS license.
If the customer's first or last name exceeded 100 characters, an error occurred when creating a new order.
On pages containing the Google maps web part, the 'SensorNotRequired' warning was being logged to the browser's console due to the inclusion of an obsolete 'sensor' parameter when communicating with the Google Maps API.
If a custom field was added to the SKU class (under the E-commerce module) and configured to be indexed by the smart search, the system did not update related search indexes correctly when the value of the field was changed for a product page.
When an existing contact submitted new information via an alternative form, their updated contact information (e.g. name, address, etc.) was not correctly merged.
Requests made through links in confirmation emails (for example password reset or unsubscription requests) were rejected as invalid in certain cases. The problem could occur if the culture context was different on the page where the request was generated and the page where the request was validated (only for certain cultures).
When using continuous integration to synchronize changes that add or remove fields of the default customizable classes, the restore operation did not refresh database views related to the affected tables. This could lead to errors.
When adding URLs starting with two forward slashes (i.e. protocol-relative URLs) into content managed by the WYSIWYG editor, the system processed the value incorrectly, which resulted in an invalid relative URL.
In certain cases, the Editable text widget could lose its content when moved from one widget zone to another.
Attempting to perform accessibility validation of a page resulted in an error.
When editing macro conditions in the 'Rule designer' dialog, rule parameters with date and time values were displayed incorrectly if the user's selected UI culture was different than 'English - United States'.
When using a custom field to store a search preview image for the CMS.File page type, the system did not properly retrieve the image and instead displayed the default placeholder image in search results in certain cases.
Copying of pages under workflow failed if the given Page type contained a custom field of the 'Unique identifier (GUID)' data type.
After changing the currency of an existing order on the 'Items' tab in the Orders application, discounts that were listed for the order disappeared in the user interface.
When creating categories in the field editor, the category disappeared from the list if the 'Category caption' contained a macro expression that returned an empty string or null.
In staging environments utilizing a chain of source and target servers (e.g. in a circular topology), staging tasks for the "Archive page" task type were not being logged on servers that received the task from another source server higher in the chain.
When a marketing automation process was initiated automatically, the 'Initiated when' value was not shown on the process's Contacts tab.
When processing serialized XML data with custom fields of the 'Decimal number' data type (for example during import, staging, or in the REST service), the system handled certain types of decimal values incorrectly if the culture context of the source data was different than on the target instance. After applying the hotfix, the system attempts to adapt the processing for all types of decimal culture formats. In rare cases this may result in an error, and you then need to fix the source data or as a temporary workaround add the <add key="CMSDisableDecimalSeparatorFix" value="true" /> key to your project's web.config file to return to the previous decimal processing behavior.
When going through the checkout process, an error occurred in some cases if the customer set a different shipping address than the billing address, and then reset back to the billing address after returning from the following checkout step.
The character encoding used when generating files in the 'CIRepository' folder can now be configured via the new 'CMSCIEncoding' web.config key. For example: <add key="CMSCIEncoding" value="utf-8"/>
The system incorrectly validated fields of the 'Long integer number' data type and did not accept values greater than 2*(10^18) or lower than -2*(10^18).
When a file was deleted from a field of the 'File' data type, the 'IfEmpty' transformation method incorrectly evaluated the field as not empty.
When using custom event handlers to automatically synchronize staging tasks, an error occurred after deleting a custom table data record.
When creating an object listing administration interface page for a custom module (using a template containing the 'Listing' UI web part), an error occurred on the page if the assigned Grid definition (xml file) contained an 'externalsourcename' column attribute whose value was not handled in an extender class.
Fields using the 'Uni selector' form control with the 'Selection mode' set to 'Multiple' did not report unsaved changes in the administration interface if a user navigated away from the editing form after removing items from the field.
If the 'Combine CSS from components' setting was enabled, the system added an unnecessary GetResource.ashx request to pages that did not contain any components with assigned CSS.
When deleting pages or objects on instances with multiple sites, the continuous integration solution removed serialized XML files in the 'CIRepository' folder for the wrong site in special cases.
When restoring 'Discount coupon' and 'Buy X Get Y coupon' objects to the database using the continuous integration solution, the current number of redemptions (coupon uses) was set as null instead of 0 and the coupons could not be applied.
When subscribing contacts (for example to a newsletter), an error occurred for contacts with a non-unique email address on instances without an EMS license.
When using the Copy all and Paste functionality to copy all web parts from a web part zone to a personalization variant of a different web part zone, an error could occur.
Altering the query string parameters of a URL leading to a non-existing page (causing a page not found redirect) made it possible for signed-in users with insufficient permissions to access unpublished data on the page not found error page.
When importing an inherited page type together with its parent to an instance where those page types already exist, changes in the parent page type were not reflected in its descendants in certain cases.
Some of the system's default pagination components (pagers based on the 'DataPager' control) generated HTML code that was not valid. The control's output code contained a doubled semicolon in the value of a 'style' attribute.
When rejecting a page under workflow, the Rejected email was not sent to the user who edited or submitted the page for approval in certain cases.
When adding a site using the New site wizard in the Sites application, the site was created incorrectly if the selected 'Site culture' was different than 'English - United States' (or if the system's 'Default content culture' setting was different when creating the site using a web template).
The 'Documents[]' macro expression did not return results (pages) according to the current content culture.
In certain cases, the system task for sending A/B tested email variants was not executed correctly on sites with low traffic. As a result, the A/B tested email variants were not sent.
An error occurred when a contact posted on a forum or subscribed to a forum post on sites with licenses other than EMS.
When building the administration interface for a custom binding class using the default 'Edit bindings' page template, an error occurred if the binding was defined between classes of the same type (for example M:N relationships between users).
When links in marketing emails contained HTML comments (for example <!--</a>-->), the links were not converted into tracking links in certain cases.
If a page had multiple aliases with wildcards in the URL path and the aliases had at least one wildcard in common (for example '/News/{number}' and '/News/{number}/{type}'), the system returned a 404 Page Not Found error when the page was requested through the more specific alias.
The initial invoices of new orders did not contain the invoice number.
The 'IndexLastRebuildTime' field was incorrectly included in the serialized data of search index objects when creating files in the continuous integration repository. This caused unnecessary changes, which were then tracked in the associated source control system.
The system incorrectly created new contacts on every request in cases where the visitor did not allow cookies (for example because of blocked cookies in the browser, or lack of consent on sites using the 'Cookie law consent' web parts).
License objects were not imported from packages that were exported on an instance with a lower hotfix version.
An error occurred after calling the 'DeleteMVTVariantInfo(variantGuid)' method in custom code when deleting MVT variants from pages.
When editing a workflow on the Pages tab in the Workflows application on instances with multiple sites, mass actions were performed for pages on all sites, even when a specific site was selected in the Site filter.
Changing the 'Web part control ID' property of a layout web part removed any content personalization or MVT variants created for web parts placed within the layout.
When using a custom jQuery UI library on live site pages, it was not possible to close modal dialogs in the on-site editing mode for pages based on Portal engine page templates.
Module classes that had the 'Can be customized' flag enabled were not included when creating installation packages for custom modules.
When editing a media library in the Listing mode in the Media libraries application, sorting the media files by size did not work.
Macro expressions with page collection (TreeNodeCollection) components were not resolved correctly on the live site for public users or other users with limited page permissions. For example: {% CurrentDocument.Children.FirstItem.AbsoluteURL %}
Macro expressions that were added in the 'Page title' property on the 'Properties -> Metadata' tab in the Pages application did not resolve correctly on the live site.
The macro report tool in System -> Macros -> Report did not check syntax validity correctly (except for macro rule expressions).
An error occurred when editing and saving images in applications that contain the Theme tab (e.g. Page layouts).
An error occurred when adding widgets containing the Path property to widget dashboards (e.g. My desk).
When using layout widgets on dashboards, the layout content was not saved correctly and disappeared after leaving or refreshing the dashboard page.
When serializing objects whose name starts with a reserved file system name ('aux', 'com1', etc.), the continuous integration solution did not adjust the name correctly in certain cases, which resulted in an error.
If the 'Custom subscription form' web part's control was added into the markup of an ASPX page template, an error occurred after submitting the subscription form on pages using the template.
When resubmitting pages based on the 'ASPX + Portal page' page template for translation, the editable content of the pages could be incorrectly translated in certain cases. For example, when the page template was modified before resubmitting the pages, the system did not distinguish between the IDs of the editable web parts and editable regions in the generated XLIFF files.
Due to a misconfiguration of the system's CSRF protection, an error ocurred when verifying payments that were made using the PayPal payment gateway. As a result, the payment was aborted and the order not marked as paid.
When running on Microsoft Azure, the smart search worker role could not index pages containing widgets. After applying the hotfix, the smart search worker role indexes widget content without the widget default values.
If the 'Check page permissions' setting was enabled or used as a parameter of the DocumentQuery API, the DocumentQuery could return different culture version of pages than expected. For example, this could cause that the page names were displayed in an incorrect culture in the content tree of the Pages application.
When the ID field of a form was not in the first position in the form builder, deleting a form record caused an error if the ID field was displayed on the Recorded data tab in the Forms application.
When editing image attachments of the pages under workflow in the Pages application and saving the images without making any changes, all variants of the edited images disappeared.
When the number of registered web farm servers exceeded the number allowed by the highest instance license, the Servers tab in the Web farm application was inaccessible.
If output compression was enabled, the system could return corrupted responses when serving static files (e.g., images or pages) from an IIS virtual directory located outside of the Kentico project folder.
When creating custom HTML layouts for forms, any <script> tags added to the layout code were not included in the output of the resulting form.
When editing contact groups in the Contact groups application, the percentage of total contact base was not displayed for the individual groups.
After creating a custom forum layout (in the 'CMSModules\Forums\Controls\Layouts' folder), the layout did not appear as an option in the 'Forum layout' property of forum web parts.
Using the 'Numeric up/down' form control for a field with the 'Has depending fields' option enabled caused an infinite refresh on the page containing the resulting form.
When using marketing automation to send email reminders about abandoned shopping carts, the restored shopping cart on the linked page did not contain products if the user was not signed in on the website.
Enabling the output debug could cause the application to return corrupted responses when serving static files (images or pages) from an IIS virtual directory with a physical path outside of the Kentico project files.
When using mixed mode Active Directory authentication, having an incorrectly configured or not responding Active Directory server caused an error when authenticating standard Forms users.
The 'Custom registration form' web part did not work correctly when its control was added into the markup of an ASPX page template.
Added security improvements to the application.
The 'Analytics browser capabilities' web part did not work and pages containing the web part generated logging requests that resulted in an error (CSRF exception). The problem occurred after applying hotfix 9.0.48.
Macros for loading component CSS did not work for transformations and web part layouts. For example: {% CSS.Transformations["custom.article.list"] %}
When the deletion of inactive contacts took longer than 1 minute, the next run of the 'Delete inactive contacts' scheduled task was not set, and the task did not execute again. To fix the problem, you need to manually execute the scheduled task after applying the hotfix.
An error occurred when using transformations with a dot character in their code name. For example, if the system fetched a transformation directly from the database, the transformation's code name was parsed incorrectly and caused an error.
Facebook insight data was not collected for pages assigned to Facebook apps using version 2.7 or newer of the Facebook API (i.e. apps created after July 13, 2016).
An infinite loop could occur when building page smart search indexes if the indexed data fields contained complex HTML or XML structures.
The 'Logic CAPTCHA' form control displayed the "(please enter the answer to the question or statement)" text even if its hidden 'ShowAfterText' property was disabled.
An error occurred after applying the subject filter on the 'Emails' tab when editing an email campaign in the Email marketing application.
Changes of license keys were not synchronized correctly between web farm servers, which could lead to logged errors in certain cases.
When sending emails from Kentico (for example in the Email queue application), images added to the email content from a media library with resized dimensions were inserted with a relative URL, which caused them to be unavailable when viewed in email clients.
Kentico instances installed from setup files with hotfix 9.0.40 or newer applied did not work (errors occurred due to missing assembly files).
If the restoring of continuous integration data to the database failed, it was difficult to diagnose the exact cause in certain cases. If the process fails during the composition of an object consisting of multiple parts, the error message now contains the file system paths of the related files.
After exporting and importing a page template containing the 'Output cache dependencies' web part, the keys specified in the web part's 'Cache dependencies' property were processed incorrectly and combined into a single invalid line.
Windows Active Directory authentication could cause an error if replacement of forbidden characters was disabled for roles via the 'CMSEnsureSafeRoleNames' web.config key. The error occurred if import of AD domain groups as roles was enabled and the authenticated user belonged to at least one group with a forbidden character in its name.
When using the search in customer selection dialogs (for example when manually creating new orders), the system only displayed customers with matching last names. After applying the hotfix, the search also uses the first name, company and email address customer fields.
When using certain external identity providers for authentication (for example Access Control Service), the system incorrectly handled situations where the identity provider returned an empty username claim. This caused an authentication loop for the client, which could result in the system generating multiple user accounts.
When using automatic web farm mode, servers were deleted from the system while restarting. As a result, the system did not create file synchronization tasks while the server was missing. After applying the hotfix, servers always remain in the system for 24 hours after shutting down (unless running on Azure Cloud Services).
Date and time values were adjusted incorrectly if the value matched the start or end interval of the active time zone's daylight saving time (after conversion to the server time zone). As a result, the saved time did not match the selected time.
If the 'Check page permissions' setting was enabled, certain pages in the content tree could be incorrectly hidden even though users had sufficient permissions to view the pages.
Due to changes in the Facebook API, an error occurred when a user attempted to sign in through newly registered Facebook authentication apps. After applying the hotfix, the 'Biography' field is no longer offered when configuring mappings of Facebook user profile fields (the field is not available in the Facebook API).
An error (System.NullReferenceException) could occur in certain cases while performing some types of operations on sites under heavy load.
When using the Items property of custom table objects in macros, the data was incorrectly cached. For example, when using the 'GlobalObjects.CustomTables["<customtablecodename>"].Items' macro, the latest data was not returned.
Enabling the 'Has depending fields' option for form fields caused an error in the resulting form when using certain form controls (Category selector, Department roles selector, Report selectors, User selector, Variation selector).
Custom ShippingOptionInfoProvider implementations could allow customers to get into a state when they could not finish their order. If customers changed their information or content of the shopping cart causing that the already selected shipping option was no longer applicable, customers were prevented from changing the invalid shipping option and could not continue with the checkout process.
When editing a primary key (ID) field in an alternative form, the field editor did not allow selection of any form control other than 'Label'.
If the system disabled cookies for a user via the 'Simple cookie law consent' or 'Cookie law consent' web part, an error (CSRF exception) occurred for each post request (button clicks, form submissions, etc.). The hotfix resolves the problem by changing the cookie level of the 'CMSCsrfCookie' cookie to 'System'.
In certain cases, adding an existing page under workflow to a campaign caused the loss of page type data entered on the page's Form tab in the Pages application.
An "Unsupported DLLs version" error occurred when applying the hotfix on some instances that were upgraded from older Kentico versions.
An error occurred when inserting inline widgets into newly created unsaved pages.
When creating a scheduled task in a web farm environment with the 'Create tasks for all web farm servers' option enabled, the scheduled task was not created for the server processing the request.
In certain cases, the value of the 'Settings -> On-line marketing -> Web analytics -> Excluded IP addresses' setting was not applied until the application was restarted.
Marketing automation processes got stuck when they contained a Wait step whose Timeout settings were set to a Specific day with the date or time in the past.
The 'Comment and move to specific step' action did not work when manually moving contacts between the steps of a marketing automation process..
When used in a form, the 'U.S. phone number' and 'Upload file' form controls generated a hidden <label> element in addition to the label in the form, which caused accessibility validation to fail for the form's output code.
File attachments were not displayed when viewing emails sent from Kentico in the default iOS email client. The problem occurred after applying hotfix 9.0.29.
Removing the UI element representing an application that was added to the system dashboard caused the dashboard to be blank. After applying the hotfix, the dashboard correctly displays applications after one of them is removed from the system.
The '{%OnlineMarketingContext.CurrentContact.ContactGroups.Count%}' macro for counting how many contact groups the contact is a member of did not resolve correctly because the 'ContactGroups' property of the 'ContactInfo' object was cached incorrectly.
When loading data with caching enabled, the system performed multiple load operations in certain cases if running in a heavy-traffic environment with a large number of concurrent requests.
When running in a web farm environment with a load balancer using non-sticky sessions, package files uploaded in the import wizard were not synchronized between servers, which could prevent the import from working.
If a screen lock occurred while impersonating a different user, the unlock dialog incorrectly required the credentials of the user who was being impersonated. After applying the hotfix, the unlock dialog accepts the credentials of the original user.
When deleting pages with the 'Redirect old URLs to another page' option, the system did not create staging tasks to update the page alias.
An error occurred when processing update and publish staging tasks for linked pages that were assigned to a category.
When the system synchronized pages under workflow with the 'Automatically update page alias' setting enabled, an error was incorrectly logged even though the synchronization was successful.
An error occurred when viewing metafiles on the 'System -> Files -> Metafiles' tab if the '(global)' option was chosen in the Site selector.
The Hotfix and upgrade utility did not display its buttons when opened on Windows 10 with 150% DPI scaling.
The system incorrectly performed set and remove session operations when handling requests using read-only session state, which caused errors (visible in the event log).
Enabling the file system (IO) debug through the information message on the 'System -> Files -> Debug' tab incorrectly enabled all types of debugs instead of just the file debug.
The 'Preferred content culture' user setting was ignored when using Windows authentication to authenticate users.
When working with widgets properties, the system was not able to identify properties as inherited from the parent web part in certain cases, leading to duplicated properties in the configuration dialog and incorrectly applied property settings.
When using Azure Blob storage to store files, the system performed an unnecessary number of requests when checking whether a file existed (for non-existing files). After applying the hotfix, information about non-existing Blob files is cached.
When using an external file storage provider (for example Amazon S3), uploading of media files caused the application to become unresponsive for the given user while the upload was in progress.
Updated the Amazon Web Services SDK for .NET to version 3.1.9.0. After applying the hotfix, the Amazon S3 file storage can be used with all Amazon data centers.
When working with the advanced search filter in the Users application, the 'Lock reason' selector incorrectly became disabled after the search was applied to the list of users.
Applications using a tree-based layout (for example the Pages application) could be incorrectly rendered if the tree contained a large number of elements.
In certain environments, the system handled 301 permanent redirects incorrectly if the target URL contained special characters. The redirects resulted in either an invalid URL or a page not found error.
If a page under workflow with child pages had its page name changed and the 'Automatically update page alias' setting was enabled, processing of the related page staging task caused an error.
The system loaded SQL query text to memory even when the SQL debug was disabled. This could lead to heavier memory usage and reduced application performance.
When a culture with a full localization pack was set as a site's 'Default content culture', macros containing 'Contact' objects were not resolved correctly in marketing automation processes for action steps placed after the Wait step.
The user impersonation dialog incorrectly displayed user accounts that were not enabled.
An error occurred when changing the code name of a custom setting key in the Modules application, if the setting had a site-specific value assigned (in the Settings application).
An infinite loop could occur when rebuilding page smart search indexes if the indexed content included certain types of strings with HTML comments.
The 'Has depending fields' setting did not work for fields using the 'Uni selector' form control with the 'Selection mode' set to Single or Multiple text box. The resulting form was not refreshed when the field's value was changed.
When using Output cache for pages with running A/B or MVT tests together with the 'Redirect invalid case URLs to their correct versions' setting configured to a different value than the default 'Do not check the URLs case' option, a blank page was displayed instead of the cached pages.
If the browser's Find dialog was opened by pressing CTRL + F while the cursor was present in the editor's area, pressing the 's' key caused the editor to behave incorrectly and attempt to save the content.
An error could occur when importing submissions translated via the Translations.com translation service if the 'Automatically import translated submissions' setting was enabled.
In a staging environment with multiple servers, the system did not create new staging tasks related to scheduled task objects when processing incoming tasks from another server (even if the 'Log staging changes' setting was enabled).
Pages smart search indexes incorrectly contained properties of the 'Editable image' web part instances. This could affect the accuracy and relevance of search results. After applying the hotfix, web part instances with a modified 'Web part control ID' are also excluded from the search index content.
If a user subscribed to a report while using a UI culture with a different date format than the en-US culture, the system incorrectly processed the value in the subscription's 'Data from last' setting. As a result, the time range of data displayed in the sent reports was incorrect.
The macro re-signing process was inefficient for certain object types. Applying the hotfix improves re-signing performance for instances with a very large number of objects or pages.
When used in a form, the 'Calendar' form control generated a hidden <label> element in addition to the label in the form, which caused accessibility validation to fail for the form's output code.
In certain cases, the winning variant of an A/B tested email was sent to the subscriber test group that already received one variant of the email.
Users with sufficient permissions for managing contacts and contact groups were not able to add contacts to groups on the 'Contact groups' tab of the contact editing interface in the 'Contact management' application. No contact groups were displayed after clicking the 'Add to contact groups' button.
When using custom event handlers for the 'URLRewritingEvents.ProcessRewritingResult' event, the 'URLRewritingEventArgs' parameter of the handler method was always null.
The Google Maps web parts did not specify API version when requesting data. This could cause the experimental Google Maps API version to be used, leading to potentially unstable functionality.
Collections of product and product variant objects in the macro engine were not loaded correctly due to a name conflict between the given macro collections. After applying the hotfix, products are available in the 'SKUs' collection and product variants in the 'SKUVariants' collection, for example {%SiteObjects.SKUs%}.
The system handled text incorrectly in the context of 'Azeri - Azerbaijan' cultures, which caused errors in certain scenarios. For example, an error occurred when creating or editing pages of the 'Page (menu item)' type on a site using Azeri as its content culture.
When using Windows authentication and contact merging, individual contacts and their logged activities were all merged into one contact after creating a new session.
When using Amazon S3 for file storage, uploading of large (100MB+) files was inefficient and sometimes failed without notifying the user.
An error occurred when using the 'Google Sitemap' web part to display the root page.
Pages smart search indexes incorrectly contained properties of the 'Editable image' web part. This could affect the accuracy and relevance of search results.
If all non-system web part property fields of a web part were deleted, an error occurred when displaying the web part's properties in the Pages application.
If an additional Page alias of a published page was set to the Alias path of an unpublished page, the system returned a 404 HTTP error for the given URL instead of displaying the published page on the live site. The issue was fixed in hotfix 9.0.17. However, the fix introduced other issues, which could, for example, cause incorrect page template inheritance in the Portal engine. After applying hotfix 9.0.35, the fix from hotfix 9.0.17 is removed. The issue will not be addressed in future hotfix versions.
When saving a page with larger attachments containing textual data, a race condition error could occur if the page was indexed by the smart search.
When using the 'Drop-down list' form control with the 'Allow edit value' control setting property enabled, a JavaScript error was logged in Internet Explorer 7.
Tracked links sent in email campaigns are converted to use lowercase URL paths by default. If you need to use case-sensitive URLs, you can override the default setting by adding the <add key="CMSLinkTrackerKeepUrlCase" value="true"/> key to your web.config.
If a Buy X Get Y discount was set to add a product to the shopping cart automatically for free, and the product was also included in another Buy X Get Y discount with a higher priority, the customer had to pay for the product and could not remove it from the shopping cart.
When using custom external authentication handlers to modify user settings, the setting fields were not correctly updated the first time users signed in.
The smart search incorrectly indexed HTML comments when building Page indexes and then returned them in related search results.
When indexing PDF file page attachments, PDF files containing certain Unicode characters were not included in the smart search index.
The 'Has depending fields' setting did not work for fields using the 'Drop-down list' form control with the 'Allow edit value' setting enabled. The resulting form wasn't refreshed when the field's value was changed.
When using deployment mode or the source control feature for external editing of object code, the 'Synchronize changes to database' function did not detect modifications of files correctly for objects with more than one externally stored code field.
When using content caching for web parts with a query data source (for example the 'Repeater with custom query' web part) together with a Where condition, the name of the cache item incorrectly contained the Where condition code. This made it difficult to work with the cache item in custom code.
When using inline widgets with localized display names, the tooltip with the display name could be incorrectly encoded in certain cases.
When using the 'URL selector' form control for a property of an inline widget, the Insert link dialog didn't show the Attachments tab.
When processing translations, the system created invalid resource string names in special cases causing an error to occur.
An error could occur when rebuilding page search indexes if the indexed content included unpublished pages. The problem occurred after applying hotfix 9.0.31.
The 'Separator' advanced setting of the 'List box' and 'Multiple choice' form controls was incorrectly set as required. This caused a validation error when saving a form field on the 'Fields' tab after creating the field on the 'Form builder' tab.
Google maps web parts did not work correctly in certain cases due to changes in the Google Maps API, which requires an API key to authenticate requests for data. After applying the hotfix, the Google maps web parts include a new property for entering the Google Maps API key.
An error occurred in certain cases when the system resized images (most commonly for files in the GIF format).
The continuous integration solution did not work correctly if a non-printing character (not valid in XML) was saved into the data of a tracked object or page. An error occurred during the serialization and the system produced incomplete and invalid XML files in the 'CIRepository' folder. After applying the hotfix, the system automatically removes such characters from text inputs.
Clicking check boxes or radio buttons within wizard interfaces with an active scroll bar (for example when editing fields in the new page type wizard) could break the wizard layout or cause incorrect scroll bar positioning.
When importing completed translations containing web part properties, an error occurred if one of the web part property values was longer than 200 characters.
If search indexing was manually disabled for certain system fields of pages (for example the 'DocumentID' field), an infinite loop could occur when rebuilding page search indexes.
Certain parts of the Kentico API did not dispose of objects correctly, which could lead to heavy memory usage and reduce the application's performance in some scenarios.
When handling the 'OnAfterSave' event of the BizForm control in custom code, the 'FormInserted' and 'FormUpdated' properties of the control were not set correctly.
When installing new Kentico instances from setup files with hotfix 9.0.7 or newer applied, the default template used by the widget dashboard in the 'My desk' application had incorrect content with invalid testing data.
When storing files in the file system, the binary data of attachments was not removed from the file system if the page with the attachment was deleted or destroyed.
When uploading a file on the Form or Attachments tabs in the Pages application, navigating to a different page or tab could incorrectly cause a confirmation dialog to show (even though the file was saved automatically). The problem occurred after applying hotfix 9.0.24.
When updating macro signatures in the System application, certain types of macro expressions located in the definitions of editor widget properties were not processed correctly and became invalid.
When editing inherited forms (for example properties of widgets or inherited web parts), the system incorrectly reported inherited fields as modified and the 'Reset field' button was active, even for fields that were unchanged from the original form. To fix the problem, you need to manually re-save the original form and then the inherited form after applying the hotfix.
Filtering products and searching for them in the Products application caused a database query timeout on sites containing thousands of products.
When using the 'Requires authentication' setting on the Security tab of pages, the page attachments could be accessible to unauthenticated users due to incorrectly cleared cache.
When using the 'URL selector' form control for a property of an inline widget, the Insert link dialog didn't show the Attachments tab. In order to fix the bug, two files need to be replaced, ~\CMS\bin\CMS.CKEditor.dll and ~\CMS\CMSAdminControls\CKeditor\plugins\CMSPlugins\plugin.js. The hotfix only replaces the CMS.CKEditor.dll library. To fully fix the problem, apply hotfix 9.0.33.
On sites with more than 2 content cultures running in an environment with at least 3 consecutive servers connected through staging, synchronization didn't work correctly when deleting culture versions of pages. When processing incoming 'Delete page' staging tasks, tasks for the next server were generated incorrectly, which could then delete the wrong culture version of the pages.
Updating pages under workflow generated an unnecessary number of search indexing tasks.
When using the 'Requires authentication' setting (on the Security tab of pages) for the site root page, the root page could be accessible to unauthenticated users due to incorrectly cached page version.
Conversion count reports for MVT tests displayed incorrect numbers.
When using localization expression in the 'Culture name' or 'Short name' properties of cultures, the values weren't localized in the text of the culture selector in the Pages application.
The 'Has depending fields' option of form fields didn't work correctly with certain types of form controls (for example the 'URL selector' and other form controls that internally utilized an update panel). The resulting form wasn't refreshed when the field's value was changed.
Emails sent from Kentico with image attachments that were added as inline email content were not displayed correctly when viewed in the Gmail client.
Importing a new site while using continuous integration for pages could delete the repository file representing the root page of a different site in certain cases. This could lead to sites displaying an empty content tree after restoring the page data back to the database.
The serialized data created by the continuous integration solution for inherited web parts was incomplete. As a result, certain property settings were incorrect after restoring inherited web parts from the repository. To fix the problem, you need to apply the hotfix and then serialize all objects again in the Continuous integration application.
When cloning one of the default alternative forms under the system's module classes, the new form was not marked as custom and its properties couldn't be modified.
When importing completed translations, content was not imported for editable regions that were created by placing 'CMSEditableRegion' controls directly into the page layout code of portal engine page templates (outside of web part zones).
When saving the metadata of a file in a media library on the 'Edit' tab, an unhandled error occurred if the file was being used by another process (for example video files opened for streaming).
Fields using the 'List box' form control were displayed as enabled in editing forms even if the field was disabled (for example using the field's Enabled condition or via content locking under workflow).
When saving the search settings for the fields of a data class (Page type, Custom table, Form, etc.), the serialized data created by the continuous integration solution was different every time, even for fields whose search settings were not changed (new GUID values were generated for the search setting items in the XML and their order changed).
The 'SearchTaskPriority' field (column) was missing in the field definition of the Search task class, which could lead to errors when using the smart search functionality in rare cases.
If one of the default system modules was selected when exporting sites or global objects, and the "Seal the selected modules" option was disabled, an unhandled error occurred and the export failed.
When using the Import Toolkit to update pages in a culture different than the global value of the instance's "Default content culture" setting, new pages were always created instead of updating the existing pages.
The serialized data created for inherited web parts by the continuous integration solution was different every time the web part was stored, even if the web part definition was not changed (new GUID values were generated for property definitions during every store operation). To fix the problem, you need to apply the hotfix and then manually re-save the property definitions for all inherited web parts (click Save for any property on the Properties tab in the Web parts application).
Increased the level of detail in event log entries for errors that could occur when initializing macro resolvers (helps determine the cause of the errors).
If an integration task failed, processing was blocked for any remaining queued tasks (even if the error was non-critical and logged using the 'ErrorAndSkip' value from the 'IntegrationProcessResultEnum' enumeration).
When deploying roles using staging or the export/import feature, page type and custom table permissions assigned to the roles weren't set correctly on the target instance.
Messages entered into the 'Display message after registration' property of 'Custom registration form' web parts didn't override the default text shown to newly registered users.
When the 'Use language prefix for URLs' setting was enabled, accessing page URLs without specifying the language prefix caused the system to perform unnecessary redirects. After applying the hotfix, the system performs only one permanent 301 redirect.
When the 'Check page permissions' setting was set to 'All pages', the Google sitemap URL was inaccessible even if the page permissions were configured correctly.
When using a collapsible field category that was collapsed by default, changes made to the fields contained in the category on the Form tab were not saved for pages under workflow in certain cases.
The macro report tool (System -> Macros -> Report) incorrectly reported invalid syntax for macro rules added to scoring rule conditions in the Scoring application.
Removing a user from a site (on the Sites tab of the user editing interface) removed the user from all community groups in the system, even those on other sites. After applying the hotfix, this action only removes the user from groups on the given site.
An error occurred when unsubscribing from forums and forum posts using the default unsubscription page.
When configuring fields using the 'Rich text editor' or 'BBcode editor' form control, the system didn't save changes made to the 'Use relative URL' setting under the 'Media dialog configuration' options.
An infinite loop occurred when the 'Multiple object binding control' form control had the 'Target object column name' property set.
Filtering by SKUs in listings and dialogs caused a database query timeout on sites containing thousands of products.
When storing media libraries in an external storage (for example Azure Blob storage or Amazon S3), the URLs to media files could be incorrectly generated in rare cases.
When creating a new media library, files uploaded into the 'Teaser image' field were not saved. The problem was introduced after applying hotfix 9.0.22.
When using the import feature to update an existing page type, an error could occur if the updated page type changed a text field to 'Required' with an empty default value.
When changing dimensions of images, the system didn't maintain the original DPI resolution and resized the images to the default resolution (96 DPI).
A database query timeout occurred when configuring a Buy X Get Y discount's "Get" condition on sites containing thousands of products with variants.
After applying the 9.0.21 hotfix on a source code installation and rebuilding the project, a warning message was displayed due to a missing XML comment of a method parameter.
When using the 'Show New button' option of repeater web parts, the button for adding new pages was not shown if the repeater was placed on an unpublished page.
Rebuilding Pages smart search indexes caused an error when an indexed page contained an instance of a widget whose definition was previously deleted in the Widgets application.
When using the 'Any word or synonym' search mode, the synonym search didn't work if a search filter was connected to the search web part.
When importing a site or object package containing marketing automation processes, triggers of the processes were imported twice.
When using the 'Tag selector' form control, trying to select tags caused an error if no tags were available or the page had no tag group specified. After applying the hotfix, the Select button of the form control is disabled when no tag group is specified for the page.
The 'Combine files with default culture' setting was not applied correctly when using the 'Insert image or media' dialog. This could lead to inconsistencies, for example when selecting images in the content tree of the dialog's Content tab.
When the 'Use language prefix for URLs' setting was enabled, accessing page URLs without specifying the language prefix led to a standard 302 redirect instead of a permanent 301 redirect.
Drag & drop functionality didn't work correctly for elements within web part or widget zones on the Page and Design tabs in the Pages application if the styles of the page applied relative positioning to the zones.
Page attachments were not displayed correctly when the 'Use URLs with trailing slash' setting was set to 'Always use URLs with trailing slash'.
When retrieving attachments from an 'Attachments' page type field in code via generated classes, the returned attachment items were not sorted according to the order defined on the page's Form tab.
When working with the advanced search filter in the Users application, selecting groups for the 'In groups' or 'Not in groups' fields did not add the values to the filter and the search condition was not applied.
When using domain alias redirection for sites, navigating to the site's domain alias URL in lower case format did not cause redirection set by the 'Redirect URL' property.
When cloning forms, the system didn't save the new form's code name correctly for the class representing the form. This could lead to inconsistencies, for example when selecting alternative forms of the cloned form.
Errors logged when attempting to install NuGet module packages created on a higher hotfix version than the target instance didn't contain sufficient information.
When editing a process for a contact in 'Marketing automation -> Processes -> Contacts', the 'Move to the specific step' selector did not have a scroll bar for displaying all available steps.
Validation didn't work correctly for form fields that were set as 'Required' and used the 'Rich text editor' form control. Values couldn't be saved if the only content was a void HTML tag (for example an image without any other content).
The 'Attachment field selector' form control did not save values selected via the 'more page types…' option. This affected the 'Attachment group' property of the 'Attachments data source', 'Page attachments', and 'Page attachments with effect' web parts.
When running unit tests using the Kentico testing API, an error occurred when attempting to filter data of faked in-memory objects based on a DateTime field.
When creating automated tests for a custom module class (Info object) using the Kentico testing API, running multiple tests in sequence for the same Info object resulted in an error.
When moving pages to advanced workflow steps that allow entering a comment, the comment text was not included in the notification email.
Calling the TreeProvider API in custom code for a page could cause the system to use an incorrect cached version of the page's data in special cases. For example, this could cause the system to move customized pages under workflow into an invalid state after the undo checkout action was used.
When using the 'Items' property of form objects (BizFormInfo) in macros, the data was incorrectly cached. For example, when using the 'SiteObjects.Forms.FormName.Items' macro (where FormName is a code name of a form), the latest data wasn't returned.
Restoring of pages using continuous integration failed if one of the restored pages already existed in the target database with the same name and location, but with a different page type.
When a contact unsubscribed from all email campaigns, the contact was not updated in Contact groups, Personas and Scoring using the macro rule "Contact is subscribed to a specified email campaign".
Invalid page attachment URLs did not return a custom 'Page not found' page (if configured for the site).
When updating existing users through the AD Import utility, user setting fields were not updated. User settings include the fields found on the 'Settings' tab when editing users (those stored separately in the CMS_UserSettings database table).
When using the Buddhist calendar in the Thai culture, processing of page views logged by Web analytics got stuck on the leap year day, February 29.
When using shared layouts for page templates, the system incorrectly allowed editing of the selected shared layout on the Layout tab of the Page templates application.
The Preview URL was incorrectly generated for pages based on the 'File' page type. Navigating to the generated Preview URL led to an 'Access denied' error.
When deploying MVC applications to Azure Web Apps or via Web deploy, the 'CMSDependencies' and 'CMSResources' folders were missing from the project.
The 'Media selection' form control displayed a broken preview image on the Form tab of pages if the image was uploaded to a cloud storage.
The 'Format' macro method didn't work correctly. The method is used to build text using composite formatting of string parameters.
When cloning custom tables, page types, forms, or custom module classes, the system allowed assigning of existing 'DB table name' values to the cloned object. After deleting the clone, the original object could not be deleted from the administration interface.
The applied product coupon code wasn't displayed when editing an order in the Orders application.
After applying hotfix 9.0.13 or newer, the 'Customer registration' activity was logged for every purchase. This activity is not used at all in the web part-based checkout process and is used only by the obsolete Checkout process web part.
When running parallel automated tests using the Kentico testing API, a race condition error occurred in rare cases.
Saving and publishing pages with fields of the 'Time interval' data type caused an error because the system used an incorrect format when loading the page data.
The 'Custom registration form' web part didn't redirect newly registered users to the URL specified in the 'Redirect to URL' property if a return URL parameter was present in the query string of the request's URL.
An error stating "Cannot write to a closed TextWriter" occurred when logging web analytics in certain cases (on sites under heavy load).
When opening the date and time selector for a field using the 'Calendar' form control, the default current time was incorrect if the time zone used on the client device was different than the server time zone configured on the Kentico instance.
Creation of new users via the REST service failed if the data of the POST request did not specify a value for the 'UserIsGlobalAdministrator' field (this field should not be required). The problem occurred after applying hotfix 9.0.15.
If an additional Page alias of a published page was set to the Alias path of an unpublished page, the system returned a 404 HTTP error for the given URL instead of displaying the published page on the live site.
Changing the code name of a page type caused fields of the 'Pages' data type to stop working and not display previously added pages.
When a user registered on a site using the 'Facebook Connect logon' web part, the email notifications sent to administrators about new users didn't contain the user's names and email address (even though the created user account contained the given data).
When viewing the list of content personalization or MVT variants defined for a web part zone, clicking the edit action for a specific variant caused an error and closed the dialog.
When running on Microsoft Azure, an error occurred when loading instances of web parts with a non-default layout assigned, if the layout contained a comment in the "<%-- comment --%>" format.
When importing media files into a site running in a web farm environment, the media files were not synchronized to other servers (the corresponding web farm tasks resulted in an error).
When restoring deleted users from the recycle bin, the user's Privilege level was incorrectly set to Global administrator.
The 'Editing form' property was missing on the General tab of the page type editing interface and could not be edited.
When continuous integration was enabled, customization of search settings didn't work for the fields of system objects (the 'Customize' option didn't work on the 'Search' and 'Layout' tabs of classes in the Modules application).
When counting the maximum number of allowed blogs for certain license editions (Base and Free), the system incorrectly included different language versions of the same blog page.
When generating code for custom module classes, the system produced invalid code in the summary sections of properties for class fields that had a 'Description' containing line breaks. This could cause compilation errors after the code was saved.
The submit button on the 'Personal settings' tab of the editing interface generated by the 'My Account' web had an empty text caption.
When using the external scheduler on instances with multiple sites, global tasks could be started multiple times in certain cases. This could cause "File not found" errors when running the 'Process analytics log' task.
Changing the value of a custom field set as the 'Page alias source field' of a page type did not update the 'Page alias' of a page.
An error occurred when posting data to the server (e.g. submitting forms) on pages that were served from the output cache. The problem was caused by a missing security token (cross site request forgery protection), which was not stored correctly in the browser when loading pages from the output cache.
The ViewState debug incorrectly displayed values for controls that had ViewState disabled (using ViewStateMode="Disabled"). Also, the 'Total size' value was not displayed in the ViewState debug on the live site if the 'Move ViewState to the end of the page' setting was enabled.
It was possible to add content personalization or MVT variants for layout web parts. This scenario is not supported and the option is now hidden for layout web parts.
Interfaces used in the DefaultCurrentContactProvider class, the default implementation of the ICurrentContactProvider interface, were originally marked as internal (for example IContactValidator). After applying the hotfix, the interfaces are public, which makes it easier to create custom ICurrentContactProvider implementations.
An error ocurred when adding widgets to widget dashboards (e.g. My Desk) when the 'Use check-in/check-out for objects' setting was enabled.
Sorting by columns on the Contacts tab in the Scoring application didn't work as expected.
When On-line marketing was enabled, the system performed unnecessary SQL queries when creating contacts, particularly when also using campaign tracking. The problem was caused due to insufficient caching.
It wasn't possible to disable logging of user agents for contacts in 'Settings -> On-line marketing -> Contact management'. Logging of user agents can now be disabled or enabled together with logging of IP addresses.
An error occurred when attempting to compare language versions of content only pages in Preview mode for content only sites. After applying the hotfix, the Compare feature is no longer available for content only sites.
Restoring of objects from the recycle bin did not work correctly for objects that belong under the same category object. Restoring of the first object succeeded, but an error occurred when attempting to restore other objects belonging to the same category.
Special characters in a group's 'Display name' were incorrectly encoded in the subject of group notification emails.
When buying variants of products, the limitations of minimum and maximum numbers of units in one order weren't checked. This enabled customers to bypass these limitations and buy unpermitted amounts of product.
When modifying product properties on a multilingual site, values weren't saved if the selected culture was different than the site's default culture.
The customer registration activity wasn't logged in the checkout process. Customer membership wasn't created for the contact.
If an error occurred during the synchronization of a web farm task, the system attempted to call a non-existing query in certain cases, which generated an unrelated error. After applying the hotfix, synchronization errors are correctly recorded into the result of the corresponding task.
If the 'Required' flag was disabled for a field set as the 'Page name source field' of a page type, the system did not correctly switch the page name source to an extra field (the Page name source field selector was updated, but the actual value was not changed). As a result, the default 'Name' field was not shown on the Form tab of pages in these scenarios.
After applying hotfix 9.0.4 or newer, the application returned empty responses in certain cases if the Content-Type was different than the 'text' subtype (for example REST or Web API responses). The problem occurred only for URLs covered by the system's output filter.
When resigning macros in System -> Macros -> Signatures with an 'Old salt' value specified, macro expressions containing open conditions or loops were incorrectly rewritten into an internal format that was difficult to read. After applying the hotfix, you need to re-save any open macros where the problem occurred.
Hotfixes did not update the assembly version information stored in the GlobalAssemblyInfo.cs file. This could cause incorrect and confusing version numbers for assemblies when compiling the source code edition of Kentico.
After saving changes to the fields of a page type, the continuous integration solution did not correctly update the serialized data representing the given page type in the 'CIRepository' folder. The problem occurred after applying hotfix 9.0.8 or newer.
If an error occurred when restoring continuous integration data to the database, the API (and ContinuousIntegration.exe utility) did not provide sufficient information about the source of the problem.
Some browsers displayed browser-specific validation messages within the campaign editing form (the form's 'novalidate' attribute was not specified).
When using global event handlers based on the 'SaveAttachmentVersion' or 'RemoveAttachmentVersion' events from the WorkflowEvents class, the 'Attachment' property of the WorkflowEventArgs argument was not initialized.
When updating users, the AD Import utility did not remove assigned domain roles in cases where the user was no longer a member of the corresponding group in the Active Directory domain.
'Pages' type smart search indexes incorrectly contained the full property definitions of editor widgets placed on the covered pages, including metadata. This could affect the accuracy and relevance of search results. After applying the hotfix, only the content of 'text' or 'long text' widget properties is indexed.
When publishing pages under workflow that used the 'check-in/check-out' feature, their Sitemap priority property (Page properties -> Navigation -> Search & SEO) was set to 'Lowest' if the original value was 'Normal'.
Deleting a Field without database representation with its Data type set to 'File' in the Page types application caused an error.
Marketing automation triggers based on the 'Internal search' or 'External search' activity didn't work (the marketing automation process was not started for contacts who performed the activity).
The macro autocomplete help didn't display the list of available methods correctly for macro namespaces and objects if one of the method's names contained the name of the given namespace or object.
On installations without the 'On-line marketing' component, visiting the live site could result in error after applying the 9.0.10 hotfix.
When the 'Combine with default culture' setting was enabled, accessing a page in an untranslated culture resulted in an error, instead of displaying the default culture version of the page. The error only occurred after applying hotfix 9.0.10.
Campaign statistics and conversions weren't logged when accessing cached pages (using output caching). The error only occurred after applying the 9.0.10 hotfix.
When displaying the details of pages with an underscore character in their Page alias, listing web parts used the Default transformation instead of the Selected item transformation.
When running the scheduled tasks for collecting Facebook and Twitter insights using the external scheduler, a license limitation error occurred, even though the correct license was available.
Planning of the next run time for scheduled tasks resulted in an infinite loop in rare cases. This could cause very high CPU usage on the server.
When accessing the 'Parent' property of page (TreeNode) objects in macros or transformations, the parent element did not contain data for the fields of specific page types.
Web farm license limitation errors were logged into the event log in certain cases, even if web farms were disabled and the system did not contain a license supporting web farms.
When creating export packages on instances with multiple sites, the exported data could contain bindings between objects from the wrong site in rare cases.
When initializing the Kentico API in an external application, the system incorrectly attempted to redirect to pages located within the Kentico project files in certain cases (for example when handling errors). These pages typically do not exist in external applications.
When downloading a file from the content tree in a webkit-based browser, its filename was encoded if it contained special characters (i.e. data,file.txt was encoded to data%2cfile.txt).
Subscribers who were subscribed to an email campaign as part of a contact group didn't receive a confirmation email when they unsubscribed from the campaign.
If the recalculated prices after changing the main currency were close to zero, transformations on the live site could have rounded the number to zero and then thrown exceptions because of dividing by zero.
The continuous integration solution did not track the creation of binding objects when adding new sites using the wizard in the Sites application.
Conversions of alternative page variants of A/B tests weren't displayed among reports in the Campaigns application.
When using the MultiDocumentQuery API, calling the 'Page' method caused the query to ignore ordering specified through 'OrderBy' methods.
Paging didn't work when using the 'Universal viewer with custom query' web part if the 'Load individual pages' property was enabled and the 'Cache item name' property was set to a custom value.
When running in a web farm environment, updates to the content of resource strings didn't invalidate the resource strings cached on other servers in the web farm. As a result, old resource string content was displayed until the cache was cleared for the given server.
When viewing the application list in the Chrome browser, the search box was pre-filled with the current user's username if the login credentials were saved in the browser and the Chrome Autofill feature was enabled.
When viewing staging tasks on the 'Pages' tab of the 'Staging' application, the titles of the listed tasks did not provide clickable links to the related pages.
Scheduled tasks with the 'Period' property set to 'Month' were not planned correctly (the system did not set a 'Next run' time).
Child pages of linked pages weren't updated by the continuous integration solution when the source page of the link or one of its ancestors was renamed, moved or deleted.
'Universal viewer' and 'Universal viewer with custom query' web parts caused an error if Paging mode was set to 'Postback' and the Pager position was set to 'Bottom' or 'Top and bottom'.
Page search indexes didn't work correctly if the indexed content included pages whose parent was excluded. When the content of such pages changed, the search index wasn't updated.
Saving changes made to the Owner field of content only pages on the General tab in the Pages application caused an error.
Scripts used in the administration UI were loaded on the live site in anonymous sessions when not required.
When editing sites in the Sites application, it wasn't possible to assign or remove page types for the site on the 'Assigned objects -> Page types' tab.
When calling the 'Where' macro method for a collection of objects within a text transformation, the method worked correctly only for the first item to which the transformation was applied.
When creating or modifying campaign emails, a duplicate scroll bar was displayed on the screen.
After renaming a field of a page type, the continuous integration solution did not update the serialized data representing pages of the given type (i.e. the 'fields.xml' files of individual pages stored in the 'CIRepository' folder).
When restoring culture versions of pages from the recycle bin, attachment files stored in page fields were not restored correctly.
The smart search crawler does not index pages on HTTPS sites without a certificate from a trusted authority. If you need to use self-signed certificates, you may override the certificate validation by adding the <add key="CMSSearchCrawlerAcceptAllCertificates" value="true" /> key to your web.config.
When deleting a site, the system did not remove the database tables storing the data of forms assigned to the given site.
The 'HTML editor toolbar set' property of the Editable text web part did not resolve macros in on-site edit mode.
When using link tracking for campaign emails, the system didn't consistently store the links in lower case in the database. The issue does not affect the link tracking functionality and was only fixed for the purposes of consistency.
When the Blogs application live tile was added to a user's dashboard, the system could not retrieve the number of blog posts and caused an error.
When configuring the 'Blog comments' widget, an error occurred after changing the value of the 'Site name' property. In general, the problem could be triggered by postbacks during the configuration of any web part or widget with a property based on the 'Blog name selector' form control.
The SKU property of the 'ProductOptionSelector' control didn't contain a setter. Selecting an option's SKU with the control therefore required an unnecessary database request when getting the SKU with its ID.
Orders could have been created with a payment method which wasn't applicable when no shipping was required.
If a page under workflow had the "Publish from" date set in the future, editing the page and moving it to the published step did not log a corresponding "Publish page" staging task (the task was logged only after the publish date). After applying the hotfix, the staging task is logged immediately, which allows synchronization of the page's published state with a set "Publish from" date.
When using on-site editing mode as an editor without the administrator privilege level, content defined through the 'HTML envelope' properties of web parts was incorrectly displayed for web parts that were not visible.
An error occurred when using listing web parts to display related pages defined through a field of the 'Pages' type (advanced content modeling) in combination with columns specified in the Columns property.
Email notifications about new data records submitted for forms and autoresponder emails incorrectly displayed time values for fields of the 'Date' data type (in addition to the entered date).
After deleting a UI element with child elements, the child elements were not displayed in the recycle bin. Restoring the parent did not restore the child elements.
When replacing existing avatar images, the new image was not saved if uploaded directly after the old image was removed without first submitting the change.
Automated tests inheriting from the 'CMS.Tests.IsolatedIntegrationTests' base class failed due to database timeout errors under certain circumstances.
When using web analytics, the system generated unnecessary SearchLogHit requests when searches with empty keywords occurred on the site.
An error occurred when synchronizing "Update page" staging tasks on instances without an EMS license (Ultimate or lower).
Highlighting of keywords in smart search results didn't work correctly when using the 'TextHelper.OnBeforeRemoveDiacritics' event to customize handling of diacritics in a way that replaces special characters with a string of a different length. Note that the search does not highlight text with diacritics in scenarios where the search keywords contain the equivalent string without diacritics (even after applying the hotfix).
Scheduled tasks configured to be executed by the external scheduling service incorrectly displayed warnings about late execution in certain cases.
Restoring culture versions of pages from the recycle bin could cause an error if the first restored version was not in the site's default culture.
When calling the 'ToString' macro method for DateTime or TimeSpan values with a formatting string parameter, the specified format was not applied to the result.
The 'Check bounced emails' scheduled task does not work when executed using the external scheduling service. Applying the hotfix disables the 'Use external task' property for the task on all existing sites. If you use the external scheduling service, you may need to manually disable the property for new instances of the task after creating or importing a new site.
On installations without the 'On-line marketing' component, an error occurred when adding a new customer during the creation of an order in the 'Orders' application.
The 'MultiFileUploader' control displayed an invalid message in scenarios where the number of uploaded files exceeded the maximum allowed number set through the 'MaxNumberToUpload' property.
When using a separated on-line marketing database, the action for removing all accounts from a contact group didn't work and an error was logged into the event log.
Web parts containing a page data source (for example the 'Repeater' or 'Universal viewer') could lose their cached data in scenarios where a custom value was set for the 'Cache item name' property. The problem usually only occurred on sites with heavy traffic.
Authentication of users did not work after setting the 'CMSUserSaltColumn' web.config key to a custom value.
Automated tests inheriting from any of the CMS.Tests base classes failed when located in a project outside of the Kentico solution folder (CMS).
Calling the 'TreeProvider.SelectNodes' method resulted in an error if the parameters were configured to retrieve multiple page types and a data column shared by at least two of the page types.
An error occurred when loading the list of scheduled tasks if any task had its "Period" property set to "Once" and was configured to be processed by the external scheduling service.
When generating authentication hash parameters in 'Settings -> Integration -> REST', the system produced invalid hash values for certain types of URLs.
Entries in fields of the 'Date' data type incorrectly displayed time in addition to the entered date on the 'Recorded data' tab of the form editing interface. Use the 'Date and time' data type for recording both date and time in one field.
The 'Save as new page template' dialog did not work correctly when editing UI elements in the Modules application.
Widget properties were initialized in an incorrect life cycle phase. This caused the View State values to be overwritten by the initial values in certain cases.
An error occurred when attempting to delete the default global administrator user account. After applying the hotfix, the system blocks the deletion and provides information about the need to first set a different default administrator through the 'Settings -> System -> Default user ID' setting.
The tooltips of columns in object listings provided by the UniGrid component did not localize resource strings in the displayed text (even if localization was allowed for the given column in the grid definition).
When creating a site through the new site wizard, the master page selection step incorrectly displayed ad-hoc master templates. Only shared master templates are offered for selection after applying the hotfix.
The direct page selector used by the 'Universal viewer with custom query' web part did not behave correctly after setting 'Paging mode' to 'Postback', enabling 'Use update panel' and changing values using the 'directPageControl' control (used in the 'CMS.PagerTransformations.General-DirectPage' transformation).
When the page name of a page was changed, the change was immediately reflected to the published page URL. This happened only when the 'Use name path for URL path' setting was enabled.
The design of the dialog used when separating the contact management database was broken.
The 'A/B tests', 'MVT tests', and 'MVT variants' tabs in the Pages application were incorrectly available on licenses lower than EMS. Opening them caused an 'Access denied' error.
Form notification emails contained broken links to files submitted in the form.
Objects restored from the recycle bin could overwrite existing objects with the same code name in certain cases. The system now always informs users that an object with the same code name already exists and must be changed before restoring.
When a UI element with content defined through an ad-hoc page template was deleted, the system did not automatically delete the given template.
The 'CartCanBeUsedOnSiteInternal' method from the 'ShoppingCartInfoProvider' class returned an inverse result to what was specified in the description. This could cause issues if used for customization.
An unhandled error occurred on the 'System objects' tab in the Debug application in certain cases after the recalculation of an on-line marketing score.
A link in the contact management application displayed in a message informing about inactive contact deletion now leads to a more relevant documentation page.
When viewing the application dashboard while the currently selected site was offline, the dashboard tiles logged a large number of unwanted errors into the system's event log.
When restoring continuous integration data to the database, staging and integration bus tasks were not logged for the changes made to objects.
GoogleBot was not included in page hit statistics when the 'Exclude search engines' setting was disabled.
The 'Countries - Weekly report' chart had a typo in the display name and the graph displayed no legend.
Certain 'Top landing pages' reports in the Reporting application displayed incorrect data.
When creating new UI elements for custom modules, the 'Clone template as ad-hoc' was incorrectly displayed if a template was selected for the element content before the element was saved. Creating ad-hoc templates for unsaved elements does not work correctly and causes errors.
Cropped image values were not deleted when switching from the 'Crop' tab. This caused an error when performing resizing of the image afterwards.
Background images were not loaded when viewing a forum on the 'View' tab in the Forums application.
Creating forums was not possible when using Kentico with a Base license.
When using a license other than EMS, creation and editing of email campaign emails led to an 'Access denied' error.
Accessing the 'Opt-out list' tab in the Email marketing application could result in an 'Unable to render embedded object' error. This was caused by an incorrect permission check.
The 'DocumentURLPath' field value was cleared when restoring multilingual pages using Continuous integration.
When exporting raw database data from the OM_Contact table via the administration UI, the Advanced export dialog did not list all the database table columns.
Switching to the Report tab caused an error if the pages assigned to the campaign used two or more conversions with the same display name.
Users could launch a marketing campaign with no display name filled in certain scenarios.
Retrieving attachments via the 'TreeNode.GetFieldAttachments' method did not work when working with an 'Attachments' page type field that used a singular form in its 'Field name'. For example, the problem occurred when retrieving attachments via generated code from a page type field with the 'ArticleImage' field name and 'Attachments' data type.
When assigning a parent object to a site binding object, the site referenced within the binding was incorrectly overwritten if the parent object had its own SiteID column with a different value. For example, the problem could occur when using the API to assign a page template parent to a template-site binding.
The export wizard stopped working in certain cases when exporting sites on instances using an external file system storage provider, such as Amazon S3 or Azure Blob storage.
When working with the HTML editor on the live site (for example with User contributions), images whose source contained content other than a URL were saved in an invalid format.
The license check for the number of allowed web farm servers was cached incorrectly, which could cause the system to display licensing errors with a delay when a server was added over the allowed limit.
When editing users on the Password tab in the Users application, the system always used global email templates for the changed password notification emails sent to the users, even if site-specific templates were defined.
When editing campaigns in the 'Campaigns' application in Internet Explorer 11, fields with invalid input were styled incorrectly after adding existing emails or pages.
Information messages displayed in the transformation editing interface covered the editing form's fields in certain cases.
When the list of tasks in the Staging application was filtered based on User, Group or Type values, the 'Synchronize all' and 'Delete all' actions were applied to all tasks instead of just the filtered sub-set.
Certain 'Browser capabilities' reports in the Reporting application displayed incorrect values. The Web analytics application was not affected.
Certain 'Top exit pages' reports in the Reporting application displayed incorrect data.
The 'URL path' of a page was not consistent with the 'Name path' in certain cases when using the 'Use name path for URL path' setting.
Attachments of deleted pages were not stored in the recycle bin, which led to missing attachments for restored pages.
Building an Azure project without packaging the project displayed an error in Visual Studio even though the build succeeded.
When saving content on the 'Web parts' tab of a page template in the 'Page templates' application, certain types of macros in the web part content became invalid.
The 'New message' dialog used when editing a group message board displayed incorrectly.
The 'Department sections manager' form control caused loss of editable region content when updating pages of the the 'IntranetPortal.Department' page type.
The 'Uni selector' form control didn't work correctly in certain scenarios when using multiple selection mode and the vertical bar character ('|') as the values separator. For example, the 'Remove selected' button removed even items that weren't selected.
If a new product was created based on a product page type with a specified default department and the department was changed when creating the product, the default department was used anyway.
The UTM campaign code on campaign email 'Overview' tab did not work as a link.
Campaign related conversion statistics were not logged on the 'Details' tab of conversions.
Added security improvements to the application.
Device profile previews and the page displaying the administration interface (UIPage.aspx) were vulnerable to cross-site scripting attacks.
When re-saving existing macros with security signatures in the 'Settings' application, the macro was stored in an invalid format and stopped working.
Macros containing "I" or 'i' characters were not resolved correctly within the context of the Turkish culture.
The panel at the top of a page was not refreshed after rolling back a version of a page.
When using Internet Explorer in compatibility mode, web analytics logged statistics for the browser's compatibility version, not the real one.
The smart search incorrectly stored the Content field within index files, which resulted in unnecessarily large files and lower indexing performance. If you need to view the Content field when inspecting or debugging index files, you can revert the change by adding the <add key="CMSSearchStoreContentField" value="true" /> key to your web.config and rebuilding the index.
The health monitoring counters that track the number of event log errors and warnings incorrectly counted asynchronously logged items twice.
Service initialization errors could occur in rare cases if the site came under load during the initialization of the application (for example after the application pool was recycled).
The labels associated with the 'Radio buttons' and 'Multiple choice' form controls were generated with invalid markup according to HTML5 validators.
The 'Insert image or media dialog' did not reflect the user's 'Starting alias path' when selecting files on the 'Content tab'. The root page was always selected when first opening the dialog instead.
The order of page categories was not preserved after an import.
Editing a page from a site on a different domain through the Pages tab in the Workflows application redirected the user to a URL with an incorrect domain.
Widgets could not be saved on pages that contained invalid image URLs (without the ~ or / character). The save operation resulted in a failed view state validation error.
Sending linked pages for translation via translation services did not work and the system displayed a 'No culture to translate from' error.
When rolling back an object or page to a previous version in the version comparison dialog, users were redirected to an invalid address if they clicked Cancel in the confirmation dialog.
Creating a new media library and using the TAB character in the 'Folder name' value caused an error when the folder was created.
An error occurred on the 'Files tab' in the Media library application when using a preferred content culture that had no pages translated. This only happened when using Windows authentication.
Instances with a <location inheritInChildApplications="false"> tag in their root web.config file weren't handled correctly. For example, an error occurred when attempting to open such instances in external utilities (such as the 'Kentico Service Manager' or when using the 'Modify' option in the installer) and the system failed to write connection strings into the web.config.
Page not found events were logged without the SiteID value in the Event log.
A/B test variant and MVT combination information was not saved to Page visit activities when using JavaScript to log analytics and activities.
When using the file system output cache, running the 'Delete old file system cache files' scheduled task resulted in a "Directory is not empty" error in certain cases.
The 'GetResource.ashx' handler didn't set the 'Content-Length' HTTP header when responding to file requests.
When trying to retrieve geolocation data from specific IP address ranges, an 'Index was outside the bounds of the array' error occurred.
Changing the template of a page under workflow using 'Clone template as ad-hoc' propagated unpublished content to the page's published version.
Complex macros working with the 'Subscriber' object did not resolve correctly when sending campaign emails.
If UI personalization was enabled, access to the 'Manage user roles' dialog in the Users application was denied for users without the administrator privilege level (even if the user had the required permissions and UI personalization settings).
Macros for loading the value of a form's primary ID field (for example '{%FormID%}') didn't work correctly when inserted into the 'Redirect URL' field on the form's General tab. The macro always returned '0'.
Form control properties with a culture-dependent data type (such as date or decimal number) didn't load their values correctly in the 'Editing control settings' section of the field editor when using a non-English UI culture.
When changing the main currency with enabled recalculation of prices, exchange rates or other e-commerce objects, the system displays a confirmation dialog box. The 'Cancel' button in this dialog confirmed the recalculation instead of canceling it.
The 'Trusted certificate thumbprint' setting used when configuring claims-based authentication didn't correctly handle special types of input, such as whitespace or non-printing characters. This could lead to untrusted certificate errors if such characters were copied into the setting along with the thumbprint.
The UniGrid didn't use the CMS.IO API when loading the XML configuration file. As a result, UniGrid components failed to load their configuration if the specified file was mapped to a non-default location using a storage provider.
Widget zones did not work in conditional layouts that were used on multiple page levels. Widget properties did not open.
The 'LoremIpsum' macro method didn't work when called with an integer type parameter. For example: 'LoremIpsum(100)'
The 'Payment form' web part displayed the names of payment methods incorrectly if their display name was localized.
The Forum group web part did not page threads correctly when the 'Enable posts paging' property was disabled.
Special sequence of requests with a specified Range HTTP header could result in the application serving incomplete images from the cache.
Improved the performance of an email marketing fix introduced in the 8.2.40 hotfix.
After applying hotfix 8.2.40, changes of a customer's details performed during the checkout process weren't saved if an existing address was used.
If a user accessed a site that banned their IP address, the system then incorrectly blocked the user from other sites running on the same instance, even if the IP address was allowed on the given sites. The incorrect ban persisted until the application was restarted.
If the process of generating campaign emails failed (due to a timeout or an application error), more than one of the same email could be generated into the email queue when the process ran again.
The shopping cart price wasn't updated when the billing or shipping addresses were changed in the 'Customer address' web part in certain cases.
When copying products, the department's default tax classes were assigned to the new product instead of assigning the original product's tax classes.
Calling the 'LocalizationHelper.GetUniqueResStringKey' method in custom code could cause an exception with certain combinations of the 'plainText', 'resKeyPrefix' and 'maxKeyLength' parameters.
When using claims-based authentication, the system generated invalid sign in and sign out URLs if the 'Identity provider URL' setting contained query string parameters.
When developing a custom 'ResourceStringInfoProvider', it wasn't possible to override the 'GetString' and 'GetStringFromDB' methods using the standard provider customization pattern.
When using the DataQuery API (or ObjectQuery/DocumentQuery), calling the 'Distinct' method caused the query to ignore sorting in descending order, for example when combined with the 'OrderByDescending' method.
The ‘Edit page link’ web part created incorrect links to pages that contained a comma character ‘,’ in their name. The 'returnUrl' query parameter used by the web part was renamed to 'editUrl' to handle collisions when logon was required before editing a page.
WebMethod calls that accessed the URLs of pages defined in the Kentico content tree returned a 404 error. The issue only occurred when registering custom WebMethods in the code behind of pages.
Sorting in the ‘Listing’ mode of the Pages application did not work correctly when changing the 'Items per page' value.
When retrieving versioned pages of a custom page type that contained a ‘Decimal number’ data type field, an application error occurred when converting the decimal value. This happened in culture versions that use decimal commas as a decimal mark.
Database installations to SQL LocalDB failed on systems that had Microsoft SQL Server 2014 or newer installed.
When saving a form field that had a macro inserted into its default value, the system didn't store the field's value in the correct culture format. This could lead to incorrect behavior when working with culture-dependent values, such as dates and decimal numbers.
When one culture version of a product page was deleted, the product's SKU object was also deleted leaving the other culture versions without the SKU.
The shopping cart price wasn't updated when the billing or shipping addresses were changed in the 'Checkout process' web part.
When using a web part or control with a custom query data source, the generated query was invalid if a table alias prefix was used for a value in the 'Columns' property together with brackets: "TableAlias.[Column]". The invalid query then caused an error when the control or web part loaded its data.
Web analytics did not log page views on Kentico installations with the Ultimate license. This issue was introduced in the 8.2.13 hotfix.
The 'Pages' tab of the 'Staging' application didn't work correctly after clicking the 'click here for more...' option in the page tree. This problem could occur on sites with a very large number of pages on a single level of the tree.
When copying pages with the 'Copy page permissions' setting enabled, page-level permissions inherited from parent nodes weren't transferred correctly to the new page in certain cases.
The system was making unnecessary requests to folders located on an external cloud storage. These requests ended with error 404, which was handled internally. Reducing the number of requests improved the performance of the system.
When sending emails in a web farm environment, servers processed emails that were already being sent out by another server under certain circumstances. This could result in duplicate emails being sent to the target SMTP server.
When using the API to manage custom data for users, calling the 'Remove' method for the 'UserCustomData' property of UserInfo objects didn't correctly remove values from the custom data container.
If a folder doesn't exist on the local file storage (it exists only on the external storage) and you wanted to cache the folder with a cache dependency on changes in a file inside of the folder, the system threw an exception. The exception was fixed, however note that automatic clearing of server-side file cache does not work for files in external storage.
Moving pages in a non-default culture resulted in an incorrect page ordering when the 'New page order' setting was set to 'Alphabetical'.
Email campaigns were incorrectly cloned together with their clicked link and opened email statistics.
The 'Visitors' and 'Conversions' A/B test statistics were not updated correctly when a visitor arrived on the tested page via a page alias.
When the 'FilterDuplicates' method was used for DocumentQuery parametrization and the result set contained several culture versions of pages, all culture versions of pages were incorrectly filtered out together with duplicate linked pages.
Setting the 'SelectOnlyPublished' property value in the code file of the 'Breadcrumbs', 'CSS list menu', 'Tree view', 'SQL search dialog with results', 'Paged text', 'Grid', 'Datalist', 'Universal page viewer', 'XSLT viewer', and 'Image gallery' web parts didn't work correctly.
The validation process used by web parts to protect against unvalidated redirects didn't work correctly and failed in certain cases. For example, the problem occurred if a 'Logon form' web part was configured to redirect to an absolute URL after successful login.
Pages were not removed from smart search indexes correctly after being moved to a location excluded by the index.
Calling the 'Exists', 'Any', 'All' or 'Filter' methods for a collection of objects inside a macro expression could cause subsequent macros to resolve incorrectly in special cases.
The 'Group profile' web part did not display the forums of the specified forum group.
The 'Enum selector' form control (and its inherited controls) caused an unhandled error in certain cases if the given field was null or contained a value that wasn't defined by the given enumeration. The control now falls back to the default value in such cases.
When viewing lists of files with 100 or more items per page (for example in media libraries or page attachments) the page loaded slowly and an Unresponsive script warning appeared when using the Firefox browser. The hotfix optimizes the loading of the file uploader component, which improves the performance of large file lists.
Workflow page actions (Publish, Approve, Reject, Check in, and Check out) could lead to an error. This happened when the page name ('DocumentName' field value) was changed while editing the page and the 'Automatically update page alias' setting was enabled.
After applying hotfix 8.2.31 or 8.2.32, sending of emails to the default SMTP server failed in certain cases. The email sending returned the following error message: "Length of the data to decrypt is invalid."
The system did not log data for the 'Referrals' and 'Referring sites' statistics and contact management activities saved incorrect values into their URL referrer field. The problem was brought by applying hotfix 8.2.13 or newer.
The link checker built into the Pages application reported errors for valid links in certain cases.
Password reset requests submitted by users failed in certain cases on websites under heavy load (database deadlock errors were logged into the system's event log).
If the 'CMSRenewSessionAuthChange' web.config key was set to true (as protection against session fixation attacks) and the system had screen locking enabled for the administration interface, a ViewState validation error occurred when unlocking the screen.
Clearing the value of the 'Owned by group' property of a page and selecting 'Inherit by child pages' caused an error.
The WYSIWYG editor was not displayed in the 'Edit text' dialog when using On-site editing in Internet Explorer 11.
When using claims-based authentication with Active Directory Federation Services, users were redirected to the site's home page after logging in through the authentication service, instead of returning to the originally requested page.
When viewing the 'Tasks' tab in the Smart search application, the selector for web farm servers didn't work correctly and a macro resolving error was logged into the system's Event log.
Creating linked pages on an instance that used a load balancer resulted in an error.
Relationship data was not included in the Recycle bin. Users were able to restore relationship names only, not individual page relationships.
Applications that work with pages, such as 'Pending pages' and 'My pages' did not always display the latest data for unpublished pages.
The protocol part of URLs was removed from links edited using the 'Insert/Edit link' dialog (in the WYSIWYG editor). This happened with links that used a protocol not listed in the 'Protocol' drop-down list, for example, with File URI scheme links.
When using file system output cache in combination with a shared storage provider (for example Azure Blob storage), the system generated errors in certain cases if multiple instances accessed the same cache files simultaneously.
When editing a site's object bindings in the Sites application on the Assigned objects -> Classes tab, the selection dialog incorrectly included classes other than Page types and Custom tables.
When running on Microsoft Azure with persistent output cache enabled for pages, the SmartSearchWorker role periodically cleared the output cache files stored on the blob storage. This could reduce the live website's performance after application restarts.
Queries created for classes when developing custom modules weren't included in the module's export package.
When exporting data to CSV files, the system added two empty lines at the end of the file (only one empty line is generated now).
The application dashboard did not always load correctly the first time it was requested. This happened to users with slow connections to the server.
Updating a page template category caused the system to log a large amount of unnecessary staging tasks.
When using the 'Bizform selector' form control in an integer type field, it wasn't possible to overwrite the first value saved for the field in the resulting form.
When using more than one instance of Azure Cloud Services, an unhandled exception sometimes occurred during the import or export of a site.
Data in the 'Property' value of the 'Set contact property' marketing automation action was not always saved in the 'en-US' culture format, which could lead to errors in certain cases.
The "if" command used for conditions in macro expressions was resolved incorrectly if either of the letters was written in upper case (the "else" branch of the condition always resolved into a null value).
The macro report tool (System -> Macros -> Report) incorrectly reported invalid signatures for certain types of macro expressions containing open conditions or loops.
When saving a form containing one or more fields that used the 'Rich text editor' form control, an unhandled error occurred during the validation in special cases if the given fields were empty.
When loading media files using getmedia requests, the system performed unnecessary file system operations. This could lead to errors in certain cases when using an external storage provider (for example Azure Blob Storage).
Products in bundles that weren't allowed for sale weren't displayed in the user interface of the bundle. Such products then couldn't be removed from the bundle.
Users were not invalidated in the cache correctly after assigning or removing a membership to or from the user.
Adding new pages and posts in the Facebook application did not work due to changes in the Facebook API.
Smart search filters didn't work correctly on pages that used a culture with special letter case rules (for example the Turkish culture). The system converted the names of search index fields in the filter option definitions to lower case incorrectly within the context of such cultures.
When opening the Kentico Installation Manager utility in scenarios where no supported instances were available, an unhandled error occurred in certain cases.
<p>When editing the page-level permissions (ACLs) for a user or a role, users were not prompted to save the changes when switching to another user or role in the list.</p>
<p>Cloning a custom table item by a user without the 'Modify' permission for the Custom table module caused an error.</p>
<p>When using a custom widget based on the 'Editable text' web part, the content of the widget was not translated using the translation service.</p>
<p>When web farms were set to generate servers automatically with the URL notifications synchronization mechanism enabled, the server root URL didn't contain the http(s) protocol which could result in the following error in the event log: "Message: Invalid URI: The format of the URI could not be determined."</p>
<p>The 'Visibility' property could not be used to control the visibility of tabs in the 'Tabs layout' web part.</p>
Requests for files without a specified extension were not redirected to the correct URL when the site used extensionless URLs for files and the 'Redirect pages to main extensions' setting was enabled.
The 'Is Empty' and 'Is Not Empty' filter options, for example in the 'Listing' mode of the Pages application, returned a '[UniGrid] : Error during reloading data' error in certain cases.
The 'Select tags' dialog was not displayed correctly on pages with a large amount of tags.
An error could occur when checking the availability of a staging server if the server used a service URL with the https protocol and a certificate that was not issued by a trusted authority (in combination with the CMSStagingAcceptAllCertificates web.config key).
Files served by Azure Blob Storage caused security warnings when accessed with the HTTPS protocol. If you access files in the Blob Storage with the HTTPS protocol, resave the files.
When regenerating product variants using the 'RegenerateSKUNameAndNumber' method, the name of the regenerated variant consisted of the name and SKU number of the original variant instead of the parent product.
If the 'ShoppingCartItemSelector' control was used outside of an ASCX transformation (for example in the markup of a web part), the generated output incorrectly contained the "Add to cart" button in addition to the link in certain cases.
Images added on the 'Web' tab of the 'Insert image or media' dialog were saved with inline dimensions. This happened even if the width and height set in the dialog did not differ from the original.
Activities weren't tracked for contacts imported using the Contact import functionality.
When the System -> General tab was opened concurrently from multiple clients in an environment with autogenerated web farm servers, an error could occur in the system. The error stated that a web farm server with the same code name already exists.
When the 'Use language prefix for URLs' setting key was enabled, the 'AbsoluteURL' and 'RelativeURL' TreeNode properties did not return a language prefix. This affected both the API and the 'GetAbsoluteURL' transformation method, which always returned a URL with the default language prefix.
The 'Live URL' on the 'Properties -> General' tab of a CMS.File page was not generated correctly when using extensionless URLs.
When using staging or export/import to transfer Page types containing fields of the Boolean type, default values of the fields didn't work correctly on the target instance in certain cases. The default value was displayed in the field editor, but wasn't applied correctly to the field values of pages.
Performing actions such as Delete, Approve, Edit or Reject on a message using the Message board web part did not work correctly. The system performed the action on the wrong message in special cases.
Filtering contacts in a marketing automation process could result in an error when using the 'Is empty' and 'Is not empty' conditions.
If product variants were created for a product that was contained in a bundle, the product wasn't removed from the bundle, but was only hidden. If the bundle was purchased, the given product was still added to the shopping cart and then purchased.
If an SKU was cloned, the new SKU was automatically assigned to bundles which contained the original SKU.
Updating a page attachment stored on the file system after adding it using 'DocumentHelper' led to a 'cannot access file' error. This was caused by the API not manipulating the binary data correctly.
When a search in the application list yielded no results and the user pressed the 'Tab', 'Shift+Tab' or arrow keys, the system logged an error.
When a new field with a very large Size was added to a page type, synchronization of the resulting 'Update page type' staging task failed in special cases (an SQL error occurred on the target server's database).
When selecting applications for the 'Default dashboard' of roles, the search didn't filter applications correctly. The search text was compared against the keys of the resource strings storing the application names instead of the actual names localized into the current UI culture.
Blog post subscription confirmation emails contained incorrect links if the pages linked by 'BlogLink' and 'BlogPostLink' macros had the 'Pages -> URLs -> Path or pattern' property set.
The XHTML output filter added an extra HTML tag to pages that contained the 'xml:lang' attribute. This caused the resulting HTML code to be invalid.
Moving a non-default culture version of a page in the content tree updated 'Name path' only only for the current language version of the page and its children. Other language versions did not have their 'Name path' updated correctly.
Using a conditional layout on a page template could cause an error on the Design tab of the Pages application.
Google and Bing map web parts did not work properly if accessed over the HTTPS protocol.
The system counted hits and conversions for URLs explicitly excluded in the 'Excluded URLs' setting.
Some of the Silverlight support report tables (in the Day, Week, and Year reports) displayed data from an incorrect time period.
Alternative forms assigned to the 'Custom registration form' web part always used the default form layout, even if a custom layout was defined. The problem occurred after applying hotfix 8.2.23.
When attempting to synchronize setting categories belonging to a module that didn't exist on the target instance, staging tasks finished without reporting an error, even though the category wasn't created on the target. In such cases, the synchronization now fails with an appropriate error.
If the 'StopProcessing' property of the 'BizForm' control was set to true in custom code during one of the control's events, the resulting form didn't display error or validation messages (for example when developing a customized 'On-line form' web part).
When creating custom ASCX form layouts, setting the 'DisplayColon' property of <cms:FormLabel> controls had no effect.
Performing workflow actions with comment on a page did not save the changes made on the page.
The system logged an exception into the event log when a Not found (404) response status was returned by a custom service (for example a Web API).
If the 'Custom registration form' web part was used with an alternative form that had the default value of the 'UserEnabled' field set to true, and the 'Registration requires email confirmation' setting was enabled for the site, the registration confirmation links didn't work. New users were incorrectly informed that their registration had already been confirmed.
When building custom UI templates containing the 'Selector' UI web part, the 'Where condition' property of the web part wasn't applied when loading the list of objects available for selection.
'Index was outside the bounds of the array.' exceptions occasionally occurred when accessing files on Azure Blob Storage or Amazon S3 storage, which could lead to very high CPU usage.
Macros that contained quotes and were stored in form layouts became invalid after resigning macros in the system. An error then occurred when resolving the expression.
Permanent URLs available from the 'Page properties -> General' required user authentication.
Users were not able to create a transformation that would hide the 'next page' button displayed by the 'Universal pager' web part when there were no next pages. This was caused by an incorrectly resolved 'PageURL' property.
An incorrect site context check caused the 'PollInfoProvider.GetPolls()' to throw an exception when called.
Performing actions such as delete, approve, edit or reject on a blog comment using the Comment view web part did not work correctly. The system could perform the action on the wrong comment in special cases.
Scrollbar did not appear in the 'Insert image or media' dialog on smaller resolutions when selecting an image directly in the content tree.
Adding of bindings between objects wasn't synchronized correctly among web farm servers in certain cases (for example when adding a product to the buy condition of a "Buy X Get Y" discount).
The system did not log conversions configured on 'CMS.File' pages when the files were downloaded by visitors.
If a user without the Global administrator privilege level opened the 'Advanced export' dialog from within certain other dialogs, the 'Export' and 'Preview' buttons weren't visible. For example, the problem occurred in the dialog used to view the opened emails of an email campaign.
An exception was logged in the 'Preview mode' of the Pages application by the 'PortalManagerHeaderPanelBreadcrumbsExtender' UI control. This happened in a scenario that required a specific configuration of user permissions, UI personalization, ACLs, and disabled device profiles.
Orders created with Kentico 7 generated taxes incorrectly in notifications and invoices after upgrading the system to Kentico 8 or after importing the orders to Kentico 8.
When making an order, the Payment form web part wasn't loaded for automatically registered new customers. This hotfix retracts the related change from hotfix 8.2.8. If a customer logs out just before payment of an order, users with the same session and URL can access the payment form. The order can only be finished by other users if customer credit is used as the payment method.
Bar chart graphs in reports were always oriented vertically even when configured to use horizontal orientation.
When a page contained the same page alias for different language versions, the system always redirected visitors accessing the alias to the language version that was first in the list of aliases.
The 'GetDocumentUrl' transformation method returned language prefixed URLs from non-prefixed sites. This occurred when using the transformation method on a site that had the 'Use language prefix for URLs' setting enabled while retrieving URLs from a different site that had the setting disabled.
When the 'Media libraries folder' setting was set to '~/', the system incorrectly created a folder named '~'.
When a product used multiple option categories but only one option category had product variants, even unavailable options were present in the product option selector.
Publishing a page that was previously renamed on the 'Form' tab and then had its workflow removed caused an error. This happened if the 'Automatically update page alias' setting was enabled when renaming the page.
Users could unlock their accounts using the confirmation link they received when registering on the site. This caused issues when the user was intentionally locked by the system, for example for exceeding the limit of invalid logon attempts or when locked manually by the administrator.
Clicking the Countries application link in the breadcrumbs caused duplication of the header.
When posting a tweet using the 'Twitter tweet button' web part, the resulting tweet could contain '+' signs in place of spaces. This happened when the web part was configured to contain a multiple word 'Default tweet text' and had the 'Use HTML 5 version' setting disabled.
Page crawler smart search indexes that were configured to use the 'public' user account failed to index content.
Report subscription emails were sent with incorrect data in rare cases.
Requests containing invalid characters in their URL could cause unhandled exceptions, for example when processing requests generated by bots. After applying the hotfix, the system responds to such requests by returning the 'Page not found' page (if one is configured for the site).
If a default value of the 'AddressCountryID' field was set in an alternative form of the Address class in the E-commerce module, custom shipping carrier providers calculated with the default country and not with the selected country.
If the first order status after creating an order was configured to send a notification email, the email didn't contain any information about the order, for example purchased products or the subtotal.
When a page was in a workflow step other than 'Edit' and an attachment was uploaded on the 'Form' tab, the header actions and content tree icon were not updated accordingly.
Kentico Web API controllers were not hidden in Web API documentation generated by the 'ASP.NET Web API Help Page' tool.
Web analytics data generated using SQL parallelism was displayed incorrectly in certain graphs. The way the data was plotted could change every time the graph was displayed.
When using Windows Active Directory authentication, an unnecessary 'Add user to site' staging task was created every time a user logged in.
The system now handles temporary unavailability of the Twitter API better when the 'Twitter insights collection' scheduled task is executed. The error logging interval was increased.
Contact filtering on the 'Contacts' tab of Marketing automation processes did not work correctly when using a separated on-line marketing database. We removed the functionality for cases when the application is configured to use a separated database.
The Import and Export wizard interface didn't display messages for errors or warning that occurred during the process. Additionally, the wizard didn't provide a way to finish the import of a site if the system contained another site running on the same domain.
Importing products using Kentico Import Toolkit from the Windows command line created pages separately from SKUs. The pages then weren't connected to the SKUs.
Alternative forms of classes were not imported correctly if the export package came from an instance of the same version. The problem could cause duplicates of custom fields within the Product editing interface.
An error could occur when sending an email containing two links that targeted the same URL.
Customer company information changed during the checkout process wasn't saved to the database. The system kept using the old information.
The AD Import utility didn't update the data of existing users when the 'Update data of existing users and roles' option was selected.
When a pie chart contained values aggregated into the 'Others' slice, the colors of the chart slices did not match the value representations displayed in the chart legend.
Accessing the PortalTemplate.aspx page while having a page in another window open on the 'Page tab' caused an error.
When conversions on different sites shared the same code name, the 'Count' value was displayed incorrectly. This affected only the listing on the 'Conversions' tab of conversions, graphs were not affected.
On-site editing caused an error when the application had tracing enabled. This was caused by the on-site editing toolbar using duplicate IDs for certain buttons.
Categories were incorrectly assigned to pages when you first selected the parent category and then a child category before saving.
If a page contained the 'Comment view' web part, adding of widgets onto the page didn't work due to JavaScript collisions.
When developing custom user interface pages with manually defined header action buttons, header actions whose assigned 'BaseButton' wasn't an instance of the 'FormSubmitButton' control were always hidden after a postback occurred on the page.
Web font icons weren't displayed correctly in the administration interface in certain environments. The '<meta charset="utf-8">' head tag was added to all administration interface pages to resolve the issue.
When a translated page field exceeded the maximum allowed length set for the field, an incorrect error message was shown.
Synchronization timed out for staging tasks that were overwritten at least once (when an object or page was changed multiple times before the synchronization occurred), if more than one target server was configured for the given instance.
Unpublished pages were accessible on the Live site through the 'PortalTemplate.aspx' page. Changes were made so that pages are no longer accessible through this page.
After upgrading to Kentico 8.2, the 'Send email campaign' marketing automation action could retain the no longer correct name 'Send newsletter issue'.
The macro engine evaluated certain operators in the wrong order, which caused incorrect results for expressions containing more than two operands (for example subtraction or division of numbers).
The macro autocomplete help dialog was positioned incorrectly within the code editor.
Resource strings were not localized correctly when used in editable regions on ASPX page templates. The default culture translation was shown in all cases.
The 'Discount coupon' web part accepted coupons even when the discount rules weren't satisfied because the web part incorrectly calculated with the total price including shipping costs.
When using the 'SqlEvents.ExecuteQuery.After' system event to post-process queries, the 'Result' property of the handler's ExecuteQueryEventArgs argument was always null.
When importing from an Active Directory domain with a complex structure, the user/group selection steps of the import utility stopped working in certain cases (infinite loading).
The functionality for generating web farm servers automatically on application start did not work properly. This issue was accidentally introduced in hotfix 8.2.12.
The 'Update statuses' action in the Translations application could lead to an infinite loop when a large number of cancelled submission items existed in the Translations.com system.
The filter in the Outdated pages application was producing a WHERE condition that was compatible only with the 'en-us' UI culture.
Emails based on 'HTML formatted text' sent via the ‘Send email’ Marketing automation step had their HTML version copied to the plain text version. This resulted in recipients seeing HTML tags in the email when viewing it as plain text.
When saving fields using the 'URL selector', 'Rich text editor' or 'BBcode editor' form control, the 'Dialog configuration' properties of the control weren't stored correctly and reset to their default values.
Events that occurred during the processing of requests with a URL longer than 2000 characters weren't logged correctly, which caused errors in the event log.
<p>Sending a linked page for translation via a translation service resulted in an error.</p>
If a custom <object> tag was added into content via the source mode of the editor, the embedded object wasn't displayed correctly. The tag was incorrectly transformed into an invalid expression.
Listing web parts with a macro data source didn't correctly detect whether the data source was empty. If the connected macro data source didn't provide any results, the web parts weren't hidden when the 'Hide if no record found' property was enabled and didn't display the 'No record found text'.
The 'File download' Web analytics reports did not order the records correctly. This caused the system to display incorrect records if the graph was configured for a longer time period.
UI personalization settings weren't evaluated correctly for roles that were automatically imported from Active Directory (on applications using Windows authentication).
Pages that contained any of the Windows reserved characters (" < > | ? *) in their name could not be sent for translation via the Translations.com service.
Deleting a Twitter post directly on Twitter, instead of using Kentico, caused inconsistencies. This resulted in an error being logged in the Event log every time the system tried to access the tweet.
The smart search indexed decimal numbers incorrectly, therefore searching in decimal type fields didn't work (for example when using field search syntax).
Smart search indexing error 'nDocs must be > 0' was logged into the Event log after moving a page into another location. This occurred on sites that had a search index set to a path that did not contain any pages.
Retrieving pages from two sites via the REST service after an application restart returned incorrect data. This occurred if the second site used page types that were not assigned to the first site.
When using workflow and the 'Combine with default culture' setting, the content displayed by listing web parts that were set to retrieve only specific 'Columns' was not displayed correctly in the administration interface. Already published content was displayed instead of the last edited version.
When the system ran in the context of an external utility (such as the AD Import), the application start and application end events were unnecessarily logged into the Event log. To resolve the issue, you need to hotfix the Kentico setup files.
Form fields using the 'Upload file' form control had an incorrect value in the 'for' attribute of the label element in the field's output code.
An error occurred in certain cases when manually clearing the system's cache (for example in the 'Debug -> Cache items' interface).
The 'DISTINCT' and 'GROUP BY' parametrization of DocumentQuery could lead to an incorrectly generated query text which could not be executed against the database.
Calling the 'CMS.Helpers.ObjectParameters.MergeWith()' method resulted in an exception.
When importing users and roles via the AD Import utility, the system unnecessarily logged 'Add user to role' records to the Event log for each role, even if the user was already assigned to the given role before the import.
The Universal viewer with custom query web part always displayed 'Edit' and 'Delete' buttons when a hierarchical transformation was used.
When the application pool was in Classic mode, some of the Strands recommender tracking scripts were not rendered on the live site.
The 'Allow file system cache' setting on the Properties -> General tab in the Pages application did not save its value correctly.
The Live URL, Permanent URL and Preview URL links on the 'Properties -> General' tab in the Pages application didn't work for users without the 'Modify' permission for the 'Content' module.
The 'Clone' action was not available in the the Forms application.
Users in custom roles that clicked 'Select displayed fields' in the Forms application received an 'Access denied' error despite having the correct permissions and UI personalization settings.
Files were not uploaded to the destination specified in the 'Path' property set in the 'Media gallery' web part.
The macro condition builder in marketing automation steps offered unsupported, context-dependent macro rules.
The email notifications sent to users as a result of the "Generate new password" action on the Password tab of the Users application used an incorrect culture context. After applying the hotfix, the emails are always sent according to the given user's 'Preferred content culture' setting.
Deadlocks could occur in rare cases if the site came under heavy load during the initialization of the application (for example after the application pool was recycled), which made the site unavailable for other requests.
Form controls that contained macros in their Editing control settings weren't reloaded correctly for fields that depended on other fields in the form. The fix ensures that the following form controls update correctly: Drop-down list, List box, Radio buttons, Multiple choice, Uni selector, Country selector, Calendar, Check box, and Text box.
Validation failed when saving date and time fields with special default values, such as ##NOW## or ##TODAY##. This prevented the field from being saved in the field editor.
When the caption of a field category contained a localized value (resource string), changes made to the text directly in the field editor weren't saved.
Customer's preferred shipping option and payment option were saved incorrectly. Wrong shipping option and payment option were then pre-filled when the customer made another order.
Email notifications with an invoice displayed the shipping costs without an applied free shipping offer if the free shipping offer was no longer running at the time when the notification was generated.
A 'RESOLVEDATAMACRO' error was logged in the Event log application when a Googlebot visited a personalized page. This happened when the personalization variant set up on the page contained a macro such as 'IsInPersona'.
When the 'CSS list menu' web part was configured to show link tooltips ('Render link title' property enabled), the text in the tooltip was HTML encoded twice. As a result, certain symbols from national alphabets were displayed as HTML entities.
When JavaScript was used to log the web analytics, search crawlers could have flooded the event log with errors in certain cases.
The 'UIHelper.GetImagePath()' method had performance issues caused by slow URL validation. For example, this negatively affected the performance of the content tree or the export of page types. Information about invalid image paths is now also logged into the Event log.
In the Pages application, an error occurred when displaying the General or Navigation properties of a page linked from a different site.
On-line forms didn't generate valid HTML content for autoresponder and notification emails. The content editor wasn't configured to wrap the content into the HTML and BODY tags. For changes to take effect, you need to manually resave settings on the 'Email notification' and 'Autoresponder' tabs of your forms in the 'Forms' application.
The list of forms in the 'Forms' application didn't work correctly if paging was used and the site contained forms that were restricted only for authorized roles.
When adding a macro expression via the Insert macro dialog on the Tree tab, expanding the 'CurrentDocument' node did not open the desired subtree and an error was logged into the event log.
The .NET Framework 4.5/4.5.1 option was disabled in the installer on machines where only .NET Framework 4.6 was installed.
When applying a hotfix, the Hotfix utility made unnecessary changes to the project's solution file, which could cause problems with the Visual Studio Version Selector.
When creating a new page which used a required field with a default value that was set as disabled, an error occurred when saving the page.
File type fields using the 'Upload file' or 'Direct uploader' form control didn't display the 'Default error message' text if validation failed for the field (when the field was set as required and the form was saved without selecting a file).
The class responsible for application of the Buy X Get Y discounts, MultiBuyDiscountsApplicator, is now available for customization.
The tree in the Categories application didn't show the last categories in the list when the vertical scroll bar was visible.
When Windows authentication was enabled, a deadlock could occur on the application dashboard in certain cases.
When using impersonation in combination with Windows authentication, the application dashboard didn't display applications correctly.
The 'Referring sites' web analytics statistic incorrectly included traffic from search engines. If the referring URL matches the domain rule of a defined search engine, the traffic source is now only logged in the 'Search engines' statistic.
When using UI personalization, a permission check for a WYSIWYG editor UI element was incorrectly performed on the 'Web' tab of the 'Insert image' dialog.
When a page was sent to the Translations.com service for translation into multiple languages, the system then imported the language versions incorrectly.
When a large number of pending translation submissions was removed from the database, the system could get into an infinite loop after using the 'Update statuses' button. This was caused by the system not sending a confirmation about downloading the resources for the deleted submissions.
When sending multiple pages for translation, words and characters were counted incorrectly if the 'Send all pages within one submission' option was not selected.
When sending a page for translation into multiple languages, JavaScript error was logged in the browser console.
The 'Strands recommendations' web part displayed incorrect product recommendations when a product template was used.
Advanced smart search syntax containing decimal values didn't work in the context of cultures that use a different decimal mark than the period character ('.'). The system incorrectly converted the decimal mark based on the current culture, even though the search syntax requires use of the period character.
Search result pagination wasn't reset correctly when two or more search filters were applied at the same time.
On-line marketing features are now included in the Kentico improvement program. More information at:
When updating the signatures of macros in 'System -> Macros -> Signatures' with a specified 'Old salt' or 'New salt' value, the new macro signatures were invalid.
When exporting files containing special Unicode characters in their name (for example Russian characters), the file names were corrupted in the export package, which resulted in errors during the import.
Setting the 'Cache minutes' property to 0 for map web parts (for example 'Google maps') didn't disable server-side caching of the location resolutions.
If anonymous customers were automatically registered after making an order, their preferred currency, payment method and shipping option weren't saved to the database.
Missing brackets around the 'OR' condition in the query generated by the 'CustomerInfoProvider.GetCustomers(int siteId)' method caused unexpected results when the method was called with extra 'where' conditions.
<p>When using the Free license, web farm related license warnings were registered in the Event log even if the web farm functionality was disabled.</p>
The 'Partial cache minutes' setting in web part properties didn't resolve macro expressions.
When the web analytics logged events without JavaScript, the page views and the aggregated views logged without the page culture.
REST hash authentication didn't work correctly for requests generated by a browser where a user was logged into Kentico. The system incorrectly evaluated the permissions of the logged in user with greater priority than the hash authentication.
If a web part with selected 'Use update panel' property performed a postback, extra CSS styles were appended to the head section, which made pages unresponsive.
Web part zone properties lost macro values after manipulating web parts on the same page (for example after copying and pasting a web part).
When the 'Page placeholder' web part was set to display a specific page on a multilingual web site with the 'Combine with default culture' setting enabled, the default culture version of the selected page wasn't displayed if the page didn't exist in the currently selected culture.
The 'Field is system' setting wasn't set correctly when editing class fields, which could prevent the fields from being updated in the database when importing new versions of classes. This setting differentiates between default module class fields and fields added later to customizable classes of installed modules. The setting's value is now true by default when editing classes in a module that is still in development, but always false (and hidden) when creating fields in a customizable class of an installed module.
An error could occur during the export process if the export data contained objects with more than 1000 records.
NotSupportedException errors could occur in the system when using a separated online marketing database. The problem was caused by DataQuery API calls with filtering via the 'WhereIn' method if the column used for selection was of the string data type and within a table located in the separated database.
When defining 'Data source' options for a field (for example with the 'Drop-down list' form control), the value was lost when saving via the Ctrl+S keyboard shortcut if the browser focus remained inside the text area.
The legacy SKU field mapping on the 'E-commerce' tab of a page type synchronized its fields incorrectly and left the fields empty when the page type had the 'Create SKU automatically when a new page of this type is created' field selected.
If the MIME type of an imported CSV file with contacts was different from the predefined ones, it wasn't possible to import it even though the CSV file was otherwise correct.
An error occurred in the file listings in 'System -> Files -> Attachments' and 'Metafiles' when the "Stored in DB" filter option was set to a different value than "(all)".
The 'GetDateTime' transformation method caused page timeouts (stack overflow) when called with an object parameter. For example: GetDateTime(Eval("MessageInserted"))
When synchronizing pages with multiple aliases, unnecessary search indexing tasks were created on the target instance.
When the system generated a password for a user or customer, the length or number of non-alphanumeric characters in the new password didn't follow the site's password policy.
Certain types of errors that could occur in the REST service weren't logged into the event log.
Certain On-line marketing functionality, for example the 'Subscription approval' web part, used an incorrect contact in special cases. This was caused by the 'ActivityLogHelper.GetContactIDByEmail' method ignoring its second parametr (siteId).
When storing files on an external storage provider (Amazon/Microsoft Azure), requests for a file that was already being loaded could result in an error until the file was fully loaded and cached on the local file system.
When building macro conditions in the Rule designer, the code of macro rules wasn't correctly wrapped into parentheses, which could lead to an incorrect result for conditions containing multiple macro rules.
An unhandled error occurred when exporting or importing custom objects with certain type information configuration options enabled (ImportExportSettings.IsAutomaticallySelected set to true).
The 'Error' CSS class, which the system applies to form fields whose validation fails, was not removed after the field became valid if the form contained at least one other field whose validation failed.
Form controls with a Data source setting for options (such as the Drop-down list) incorrectly filtered out duplicate items when using 'List of options' or 'SQL query' as the data source. Duplicate items are no longer filtered out.
When a page contained multiple watermarked images, the images weren't loaded correctly.
A new virtual method, ShoppingCartInfoProvider.GetCurrentShoppingCartInternal(), was added to allow customization of getting the current shopping cart, for example when you want to use the same shopping cart on more sites.
Deadlock tracking information was written to text files in the App_Data folder even if the 'Settings -> System -> Log to filesystem' setting was disabled.
The application dashboard didn't display the correct content and customization didn't work when using impersonation in combination with Windows authentication.
Dynamic recalculation of contact groups failed if the condition contained certain types of macro rules ("The given key was not present in the dictionary." error was logged into the event log). The scheduled full rebuild of the contact group was not affected.
When using Amazon S3 storage, files larger than 4MB weren't uploaded correctly.
Using user contributions to insert an image attachment on an unpublished page resulted in a 404 error.
Validation of URLs containing an "at" symbol (@) incorrectly failed in certain parts of the system, for example when inserting links in forums.
The Smart Search -> Tasks page was missing some header actions and some data was not displayed correctly.
Old temporary upload files were not removed by the 'Delete old temporary upload files' scheduled task when the scheduled tasks were set to be run by the external Scheduler Windows service.
When importing new versions of custom modules, the definitions of class fields are not updated correctly in the database in certain cases. To fix the issue, make sure that the 'Field is system' setting is disabled for all class fields before creating the export package. After applying the hotfix, this field setting is now disabled by default for new fields of custom classes.
When using Azure Blob Storage or Amazon S3 Storage, if a visitor closed a page before all media on the page were downloaded, an error was logged into the event log. This could lead to performance problems on pages with high traffic.
Date & time macro methods called in transformations (for example GetCurrentDateTimeString) didn't work correctly if the selected culture of the content was different than the default culture.
Certain components in the administration interface generated controls with the same ID. This caused an error where multiple controls with the same ID were found.
The form control validation method IsValid() was called twice during form validation, which could cause problems in certain scenarios.
Email A/B tests were showing an incorrect winner selection date and time. This happened when the selection was postponed to achieve unambiguous results.
Using the 'Custom table item selector' form control without a set custom table resulted in an error.
When a user registered using the 'Custom registration form' web part, the user's contact was not updated with the filled in information.
Filtering of contacts and accounts according to the 'Country', 'State' or 'Account owner' fields in the Contact management application didn't work correctly when the Online marketing database was separated.
The 'GetCacheDependencies' method of the CacheHelper class didn't return default dependencies correctly. The method now returns default dependencies only if the first parameter is null or contains the ##DEFAULT## macro.
URLs of pages with the 'Payment form' web part were accessible also for other users or logged-out users who were then able to display the order ID, the selected payment method and the total price of the order.
The Smart search control called the web analytics service even when web analytics were disabled.
There was an incorrect permission check in the user contributions feature.
It wasn't possible to disable scheduled tasks belonging to modules that weren't installed (validation failed when saving the change due to the non-existing assembly and class). The assembly and class validation no longer applies when saving disabled tasks.
Creating pages with REST API did not allow white spaces in the 'NodeClassID' value.
The 'Display name' of pages types was always localized in the default culture in the New page listing, even if a different culture was selected.
The 'GetCustomDateTime(Now, "TimeZoneCode")' macro method didn't return correct results due to a conversion error (an error was logged when evaluating the macro expression).
Workflow button localization in the User contributions web parts did not work correctly.
When sending emails for campaigns with persona subscribers, mails were sent to contacts in the given persona even if they should have been blocked due to their number of bounced emails.
The Clicked link tracking dialog was not wide enough and a horizontal scroll bar appeared in some cases.
The "How to" link for importing contacts was missing in the help section of the Contact management application.
Success messages weren't displayed when viewing the code editor in full screen mode (for example after successfully saving).
Attachment URLs in an extensionless configuration did not work correctly in the '~/getattachment/<GUID>?foo=bar' format, where the attachment's file name is not included.
Statistics of different language versions of a page were not merged correctly when one of the culture versions of the page was deleted.
An error was logged into the event log when obtaining an Access token for Google+ (via the 'Get' button in Settings -> Social media -> Google+).
Deleting a page type inherited by another page type caused an SQL error. This action now displays a message instead.
Staging tasks for automatically rebuilding dynamically generated contact groups were not synchronized correctly via staging.
The 'Member management path' and 'Member profile path' community settings didn't allow creation of culture-specific links. It is now possible to add the {CultureCode} wildcard into the path values, and then create pages with a URL containing the culture code in the corresponding location.
Culture-specific versions of resource strings stored in the system's resx files weren't loaded if the given resource string was overridden in the database and existed only in the default culture. The default culture version of the string was always returned instead.
Module folders with period characters (".") in their names weren't included in the export process (the export only worked correctly if the periods were replaced by underscores).
When the 'Group registration' web part was set to automatically create a forum for new groups, the new forum was not visible in Groups -> edit the new group -> Forums on the 'Forums' tab. The web part didn't set the 'ForumCommunityGroupID' column to reference the newly created group in the 'Forums_Forum' table.
Events of specific object types (Info.TYPEINFO.Events) were initialized incorrectly in certain cases, which could lead to event handlers being attached and executed multiple times.
The first request after the application start incorrectly executed certain actions in the context of a public (anonymous) user in certain cases, due to an incorrect caching mechanism.
Default values of form fields containing a macro value were resolved according to the default culture, rather than the currently selected culture (could affect forms in the administration interface and on the live site).
Collapsible form field categories didn't work correctly if the form was placed inside an update panel.
The 'Image selection' control could display a wrong path if the image was added on a multilingual website.
When multiple files were attached to a page, updating the second or the next attachment updated the previous attachment instead of the selected one.
Web part properties used to specify SQL columns were parsed incorrectly if the value contained an SQL expression with special characters (for example parentheses).
The error message shown after opening an expired password reset request contained another reset password link that always sent the user to a Kentico system page. The link now points to the URL specified by the 'Security & Membership -> Website logon page URL' setting (if available).
Object listings (UniGrid components) only saved their state (paging, ordering, etc.) if a filter was applied to the list.
When creating a new A/B test page variant, the variant name and alias were not synchronized across servers when using Staging.
Whitespace characters in site domain aliases were not trimmed.
When indexing PDF files with certain types of character sets (stored as page attachments), the smart search text extractor fell into an infinite loop, which prevented the index from rebuilding.
If the the Enter key was pressed while the browser focus was on a 'Smart search filter' web part, the submit action was performed on the first submit button from the top of the page (often a Sign out button).
'NodeAlias' was not trimmed to the maximum allowed length when updating a page which led to an error.
An error occurred during logon page redirection if the page was located on another domain/site and a matching page didn't exist on the current site.
The properties of Info objects that represent multiple object types were resolved incorrectly within macro expressions in some cases (for example SKUInfo, which represents both products and product variants).
When updating the signatures of macros in 'System -> Macros -> Signatures', the interface didn't display a progress log.
Inserting two resource strings into a localizable text box caused an unhandled error. This is not a supported scenario. The system now treats such values as standard text without switching the text box into localized mode.
Localization of content displayed by live tiles on the application dashboard didn't take the user's preferred user interface culture into account.
The code names of alternative forms weren't validated for the presence of period characters ("."), which are not allowed. The system now rejects code names containing periods, and replaces periods with underscores when generating automatic code names.
When editing form controls after applying hotfix 8.2.4, the checkboxes in the 'Control scope' settings were not populated (always displayed as empty).
The 'GetSKUOriginalPrice' transformation method with taxes enabled ('taxes' parameter set to true) returned the original price of a product without taxes if the product was sold for a lower price than the list price.
The class generated for product page types did not inherit from SKUTreeNode.
Partial caching didn't work for projects with deployment mode enabled (virtual files were saved to the file system).
Blog month pages were created in an incorrect language when creating a blog post in a non-default culture.
When using Data query to load fake info objects in unit tests, the result was always an empty data set if the query didn't contain any filtering conditions.
If a discount with coupons and the 'Order is one of the X first orders' rule were used, the count of the used coupons was not increased when the discount was applied.
The 'Date & Time' web part didn't work correctly (even after applying the previous 8.2.3 hotfix).
The list of items in the Macro autocomplete help had incorrect icon indentation when viewed in Internet Explorer 10.
Selectors positioned within the page toolbar (for example Site or Server selectors) were too short and not aligned correctly.
When a site-related object was cloned while staging was enabled for object changes, the system didn't log staging tasks for assigning the new object to the appropriate site.
The social marketing auto post form controls didn't work correctly with linked pages.
When the smart search found more results than limit specified by the 'CMSSearchMaxResultsNumber' web.config key, the pager showed only the limited result count instead of the total result count.
Campaign report graphs displayed incorrect data for campaigns whose name contained 2 byte Unicode characters.
The 'Persona-based Recommendations' web part's cache was not cleared when a page was tagged or untagged with a persona.
The maximum allowed length of a page's 'NodeAlias' wasn't ensured in all cases. This could result in a 'String or binary data would be truncated' error when working with the alias.
When inheritance of the Page tag group was disabled without specifying any tag group on the Metadata tab in the Pages application, the change was not saved.
Certain context-specific macros weren't resolved correctly in emails based on membership email templates.
Occurrences of invalid parameters in macro expressions weren't logged in the event log or while debugging.
Updating of an empty localization macro expression in the display name field of any object caused an error.
The Import and Export wizard interface didn't display an error message if the system had insufficient file system permissions for the import/export temporary folder.
The 'Multiple choice', 'List box' and 'Radio buttons' form controls didn't reload their content when configured to depend on another field. The controls now update their content when a field with depending fields triggers a postback in the form.
The 'Email template selector' form control couldn't be configured to select site-specific templates. The control now has a Site property (in the advanced editing control settings), which can be configured to allow selection of either global email templates or those belonging to the current site.
When creating M:N binding classes with a compound primary key, the 'Reference' settings used to select the related classes weren't available in the field editor for the class's primary key fields.
Orders were not displayed with the correct item prices while editing the order when at least two products with a text or attribute option were ordered.
When multiple threads were running during the initialization of the application, a deadlock could occur in the initialization of the debug settings, making the involved threads stuck.
The web part properties dialog could not be saved in IE 9 for web parts with a text editor property.
The 'Date & time' web part didn't work correctly (displayed no output).
Page culture version statistics were not merged correctly after deleting one of the culture versions.
Resizing the code editor by toggling fit-to-window mode didn't work correctly in certain parts of the interface.
Object listings (UniGrid components) didn't filter data according to the default filter settings on the first load of the page.
Listing filters for numeric fields (double or decimal type) didn't load their saved state correctly in cases where the filter contained an invalid value.
Having a page search index with 0 indexed items could result in an error ("nDocs must be > 0") when indexing pages and their categories. The error caused the given indexing task to fail, which prevented the system from processing subsequent indexing tasks until the failed task was cleared.
When creating online form records via POST requests to the REST service, the operation failed if the request data didn't manually set a value for the 'FormInserted' timestamp field. The field's value is now set automatically based on the creation time.
Fields using the BBCode editor remained disabled after checking a page out in certain cases.
The 'Clear hash tables' button on the 'System objects' tab in the Debug application did not clear hash tables of email campaign A/B tests properly.
Localization macros in form notification emails were always resolved in the default culture.
In some cases one macro's parameters could affect the result of other macro expressions that were resolved within the same context.
"Unlock account" emails sent after the limit for failed logon attempts was exceeded were always in the default culture instead of the culture selected by the user who locked their account.
The log displaying the progress of asynchronous actions (importing subscribers, staging synchronization) stopped working in certain cases after a very large number of actions was performed, even though the actions continued in the background without any problems.
The value of form controls that open dialogs was not saved when using the 'Check in' action.
An error occurred when an in-place localization macro was changed to another value in the field editor.
When the SKUPrice property was specified with more decimal places than the main currency allows, product prices were displayed differently on the product detail page and in the shopping cart content.
The UniSelector control linked bootstrap.min.js onto the page even if not required.
The name of the 'Delete condition' settings category in 'Inactive contacts' settings has changed to 'Delete contacts matching all the following conditions'.
Widget properties were in conflict with system properties in certain cases. This could cause an exception.
The 'Contribution list' web part did not allow creating new items if the page was under a path that contained an underscore (_).
The site selector on the 'On-line users' tab of the Users application did not work.
If the 'UserName' field was visible in an alternative form, but wasn't included in the form's custom layout, 'Custom registration form' web parts using the given alternative form created users with an automatic username instead of the email address inserted via the form.
Deselecting all items in a selection dialog (UniSelector in multiple mode) could bring the selector into an invalid state.
Domain URLs were not redirected to the default page in certain cases.
The listing (UniGrid) filter disappeared when users clicked the 'Search' button while the FilterLimit was set to 0, the filter area did not contain any text, and the grid was empty.
Update staging tasks weren't created when an object was restored from backup. This caused an inconsistency between the staging source and the target servers.
Some social media web parts placed on a page accessed through HTTPS failed to load because they requested content over HTTP.
Reports were not moved to the object recycle bin when their parent category was deleted (even if the recycle bin was enabled for all objects in the settings).
The campaign report graph returned an error if the campaign display name contained a space at the beginning or end.
An incorrect resource string was used for Form validation error messages. This complicated customizations of the error message value.
A dash was removed from the word 'E-mail' in the email validation rule to make it consistent with the rest of the system.
In situations where the result of a macro expression is another macro expression and the 'notrecursive' parameter isn't specified, the resulting macro expression is resolved as well. Inline macro parameters (culture, encode, etc.) were only applied on the first level of the recursion in these cases.
Some of the hash tables used for state and country data weren't cleared when manually clearing the hash table cache (for example, using the 'Clear cache' button in the 'System' application).
It was possible to modify Forum web parts through on-site editing even if the 'On-site management' property in Forum web parts wasn't selected.
If an inherited page type had an alternative form with the 'Make new fields hidden' property enabled, new fields added to the parent page type were still visible by default in the alternative form.
An error occurred when opening the Preview tab of the 'Direct uploader' form control in the Form controls application.
An incorrect 'Send on' date (1/1/0001) was shown when editing an A/B tested email under very rare circumstances.
Approving a campaign subscription created a duplicate contact in certain cases, when the 'Automatically recognize visitors by user agent' and 'Automatically recognize visitors by IP address' settings were enabled.
The 'Select users' dialog in Email campaigns displayed users with no email address.
Email priority was always set to "normal" if an email wasn't sent via the E-mail queue and its sending failed.
Closing the condition builder in the Order discounts application could cause JS errors when using Firefox.
Changes made to the 'Debug SQL connections' or 'Enable detailed macro debug' settings didn't take effect until the application's cache was cleared.
Tiles in the application dashboard were not aligned correctly when zoomed-in in Chrome.
The 'Add stamp' button was not visible in the 'Merge collisions' dialog.
When more than 32 contacts were merged into one another in a chain, the parent contact was not found due to a recursion limit.
Localization macros for chat support request transformations didn't return the correct UI culture.
The default path to the persistent output cache directory obtained through public API contained double backslash, which might have caused issues in certain cases.
When a user changed the avatar in the 'My account' web part and the client cache was enabled, the old avatar was displayed in the browser.
When global or site banned IPs were cached in hash tables, the 'Clear hash tables' button on the 'Debug -> System objects' tab didn't clear the given tables.
The 'DocumentHelper.GetDocument' method returned no results when called with 'NodeSelectionParameters' that did not contain a 'NodeAliasPath'.
The 'YouTube video' web part did not support adding of protocol agnostic Video URLs.
Padding was missing in the 'Groups -> Message boards -> Messages -> Edit' dialog.
Field attachments of a newly created culture version of a page were not synchronized to the target server.
The 'Recalculate time zone' scheduled task logged staging tasks for updating the time zones. The task no longer generates staging tasks, since they were unnecessary - the time zones are automatically recalculated by the same scheduled task on the target server.
Additional site settings that influence page markup now clear the output cache after their value is modified.
The 'LuceneSearchDocument' class was made public to support advanced search customization scenarios.
When saving a new smart search index, a missing directory error was logged into the system's Event log (the error was a false positive).
After deleting a project/task status or task priority, all projects and tasks using the given option were also deleted. The system no longer allows deletion of statuses or priorities that are used by existing projects or tasks.
Strands recommender did not work on pages accessed via HTTPS. The Strands library was referenced via HTTP, which is not allowed on pages loaded via HTTPS.
Fields were not marked as required when creating a new form record on the 'Recorded data' tab of the Forms application.
When using the 'Message board' web part with the 'Show Name field' property disabled, users couldn't post messages due to incorrect validation in certain scenarios.
Some macro rules were not localized correctly. This resulted in incorrect 'Display names' being shown throughout the system.
The reset password URL in the email that users receive after submitting a password reset request on the logon form wasn't resolved correctly in certain cases.
Module usage tracking errors were logged into the event log if certain separable modules were not installed (for example Chat).
The status configured by users for the visibility of their user profile fields wasn't displayed correctly.
When a product stayed in a customer's shopping cart after logging out and the product was then selected in the 'Buy' conditions of a Buy X Get Y discount, an endless loading loop occurred after the particular customer logged in again.
Payment notification emails were sent with information about the previous order status instead of the current one.
It was impossible to set the tax registration ID and the organization ID in the My Account web part since their fields were hidden.
When using more than 2 decimal places and paying with PayPal, the PayPal provider returned the 'Payment failed' status because PayPal rounded the amount.
Preview of an image modified in the Image editor was not refreshed correctly in the 'Insert image' and 'Insert media' dialogs and in the BB code editor preview.
Result messages were not displayed correctly when performing mass actions in the contact management interface.
The toolbar of the code editor was too wide in certain text area fields.
The fullscreen mode of the code editor wasn't stretched to full width.
Permissions were checked incorrectly, which prevented users from cloning personal canned responses when using support chat.
When deleting or disabling a chat room with users in it, the system displayed potentially misleading information to the users. All users are now kicked from the given room and correctly informed about the status.
When clearing the cache in a web farm environment, the system didn't generate web farm synchronization tasks correctly (depending on the scenario, either too many tasks or no correct task).
Users without the 'Manage blogs' permission incorrectly saw the '(all)' option in the 'Blog' drop-down list of 'My blogs' application.
When subscribing to blogs, the double opt-in confirmation email was sent even if the double opt-in feature was disabled.
When using multiple banner rotators on a page, impressions were logged for one of them only.
Paging didn't work correctly for certain viewer web parts (for example the 'Message board viewer') if the 'Paging mode' was set to 'PostBack'.
Page visits were incorrectly logged for the visitor's preferred culture instead of the actual culture of the visited page. This happened when the page did not exist in the visitor's preferred culture.
When using translation services to translate pages under workflow, the content of widgets was not translated correctly. The latest published version of the content was sent for translation instead of the latest edited version.
When deleting page types, the page type's views were not deleted before the page type's table. This could lead to database inconsistencies in certain cases.
Some of the exceptions which can occur during the payment processing were not saved to the event log.
Saving the template of a page as a new page template deleted the original template file when deployment mode was enabled.
The 'Clear cache' button on the General tab of the System application didn't remove all output cache data stored in the file system (when using persistent file system storage for the output cache of pages).
The gravatar image wasn't displayed for users in certain cases (after the user switched to a gravatar from an existing standard avatar).
When editing an avatar, saving the form cleared the original values of the "Make default for" checkboxes, which could result in incorrect default avatar settings.
Widgets were not displaying the latest content version when using workflow and content locking (check-in/check-out).
The 'CMSRepeater' control displayed a full stack trace on the Live site when an error occured.
The 'Disable macros' property was not available in certain web parts.
When manually choosing items in selector dialogs (fields using the Uni selector form control), removing the last selected item caused a validation error, resulting in the following warning: "The selector data is not consistent, please try to refresh the page."
If the URL of a page contained the hash character ('#'), the Facebook logon functionality did not work.
When an authenticated user submitted a form displayed by the 'Custom subscription form' web part, an error occured.
The Smart Search Worker role generated a web farm server on Microsoft Azure, which could cause licensing errors.
Pages containing the 'Message board' web part took a long time to load if no ad-hoc message board was created. This was caused by redundant queries on the database. Moreover, the 'Enable anonymous read' web part property did not have any effect and the 'Allow comments' permission affected only the UI, it was not checked server-side.
When enabled, the 'Store files in file system' could cause an 'UnauthorizedAccessException' exception in the Image editor.
Even when the 'Use site prefix for user names' property in the settings was enabled, the site prefix for usernames wasn't added to customers converted to users in the Customers application.
The 'File path' property in media library web parts did not work correctly.
When using automatic generation of web farm servers, a server sometimes failed to be created after restarting the application pool, which resulted in a missing web farm server in the database.
When the 'Log via JavaScript snippet' setting was enabled for web analytics, the search in the Pages application displayed an error in some cases after opening the Live site in another tab.
When copying a linked page together with child pages between two sites, the children were not copied.
When creating a form-based filter, the 'Number filter' form control didn't work correctly for fields of the 'Floating-point number' and 'Decimal number' types.
Setting the Values separator to a different character than a comma or semicolon (for example the pipe character '|') caused the Uni selector control to work incorrectly.
When adding a new product to an order in the administration interface, the product filter didn't work correctly when filtered by the SKU field.
When the definition of a custom table or an online form field was changed, the hash tables with cached item values were not cleared.
When creating a new page and adding a file attachment that was then re-uploaded before saving, an error occurred after saving the page.
Placing a Repeater web part on a page together with a custom data source resulted in an error when loading the data.
When converting date and time values to UTC from time zones with daylight saving time enabled, the time shift was added incorrectly for dates outside of the current year.
When using external storage for the file system (Azure blob or Amazon S3) and processing of smart search indexing tasks via the 'Execute search tasks' scheduled task, lock errors occurred in certain cases. The errors could prevent the search from working correctly.
When indexing pdf documents, the smart search text extractor fell into an infinite loop in certain cases when processing malformed inline images.
Page REST API did not retrieve pages with Unicode characters in their 'Page alias'.
When cloning report subscriptions, the system didn't validate the email field correctly, allowing the creation of subscriptions with an invalid email address.
When a linked page was renamed, the 'Name path' and the 'URL path' were not updated correctly for child pages of all linked pages and the original page.
When using custom ASCX type layouts for forms, the system ignored values set for the 'CssClass' property of the 'FormSubmit' control.
Ordering of message boards according to the values of specific fields didn't work correctly in the Message boards application. The order wasn't applied to all boards if the list was split into multiple pages. Clause 'ORDER BY ##ORDERBY##' has now been added to the 'getboardlist' query.
When uploading files to a media library, an ampersand character (&) in the file name caused a 'not allowed extension' validation error.
The macro console in the 'System' application generated macros with invalid syntax if an object represented by an index was selected in the macro tree (for example "ObjectTypes.AllObjectTypes.[0]").
When switching between folders on the 'Content' or 'Media library' dialog tabs, the content wasn't displayed correctly if the second or greater page was selected and the new folder didn't have enough content to offer a pager.
Banner images weren't staged when the target server was running in a root directory but the source server was not.
When multiple attachments were uploaded into a page under a workflow, only the last attachment was inserted into correct version of the page.
When using proxy load balancer, Web analytics functionality didn't work if the load balancer had different domain name than the actual website.
Users with disabled accounts were able to request a change of password.
An error message was displayed after deleting a Facebook post through the Kentico administration interface. The problem only occurred if the Facebook app used for the integration was created before a certain date.
The Kentico CMS Scheduler service failed to process certain scheduled tasks because web farm synchronization mechanism failed due to files being locked by another process.
When running certain scheduled tasks (for example 'Check bounced e-mails'), an exception was logged into the event log if the required modules weren't installed. The tasks now show an appropriate message in the last run result.
With the 'Use name path for URL path' setting enabled, copying a page with child pages could cause the copied child pages to have wrong URLs due to an incorrect name path.
Sending of newsletter issues didn't trigger the 'NewsletterEvents.GenerateQueueItems.After' event.
The 'Logon form' web part always sent e-mails (for example password change requests) in the default culture instead of the culture selected for the page.
If the server had the 'Newtonsoft.Json.dll' library with a file version older than 5.0.0 installed in the Global Assembly Cache, a "Method not found" exception occurred on the site.
Fields in custom collapsible categories in product page types were displayed incorrectly.
Smart search did not return any results when using Amazon S3 storage.
After applying hotfix 8.1.11, the Content staging functionality didn't synchronize binary data of media file tasks to the target server.
Adding of inline widgets into editable regions didn't work correctly. The problem occurred if the widget's parent web part contained a custom property with a specific name (for example 'Container').
The 'Lightbox gallery' web part wasn’t rendered correctly if the 'Border size' property was set to 0.
Although the 'Render CSS classes' checkbox was cleared in the 'Web part properties' dialog, the 'CSS list menu' web part still rendered CSS classes (e.g. 'CMSListMenuLI' for LI elements).
When synchronizing a page in the default culture using the 'Staging' application while the 'Automatically update page alias' and 'Use name path for URL path' settings were enabled, the page was synchronized incorrectly in certain cases.
When a page was synchronized via staging and the 'Use name path for URL path' setting was enabled, the page URL path wasn't preserved correctly after synchronization to the target server.
When using the Windows service to execute scheduled tasks for instances with multiple running sites, the tasks could be processed in the context of the wrong site.
Contact replication stopped working if multiple leads on SalesForce had an identical external identifier.
When compression for page output and resources was enabled, the content couldn't be handled correctly by certain types of proxy servers.
An exception was displayed when unsubscribing subscribers on the 'Import subscribers' tab of the Newsletters application if one of the listed subscribers or selected newsletters didn't exist, or if there was no connection between some of the subscribers and the selected newsletters.
Localization macros placed into the validation error messages of online forms via the 'Edit value' dialog weren't resolved in the correct language (the result was always in the default culture instead of the currently selected one).
When using fields based on the 'Drop-down list' form control with an SQL Query Data source, macros inside the query weren't resolved correctly when a postback occurred in the form (for example after a change in the value of a field with other depending fields). The fix only affects the Drop-down list form control.
Users with the 'Global administrator' privilege level couldn't select images using the 'Image selection' form control if their user account wasn't assigned to the given site.
An error occurred if the UniPager control was connected to a listing control with a custom data source implementing the IEnumerable<T> interface (for example a List collection).
When using A/B testing for pages based on ASPX templates, the system always displayed the original page.
The 'IsDocumentOnSelectedPath' transformation method didn't work correctly if the compared pages shared substrings in their alias.
When staging Online forms across multiple servers, the synchronization failed if the form was already present on the target server for all servers except for the first. Changes made in the form builder didn't generate synchronization tasks at all.
When multiple websites were installed on one instance of Kentico, the 'Visitor culture' drop-down list in the site settings also offered cultures which were not assigned to the site.
When loading page data using REST service calls with data limited using the 'columns' parameter, the request resulted in an error if certain required columns weren't loaded. The basic required columns are now loaded automatically.
When creating new objects via the REST service, the system inserted duplicate records for object types that stored child objects within the same database table (for example 'ecommerce.sku' objects).
The page template editing interface incorrectly displayed the Layout tab for templates of the 'ASPX + Portal page' type.
The 'DocumentLastPublished' column value was cleared after moving the page in some cases.
If a page type had a field with a name identical to a reserved SQL keyword, an error occurred when editing pages of the given type.
Single object export of a page type failed when performed immediately after the page type was created.
The output filter resolved relative paths incorrectly for applications installed into the root folder.
An error occurred when performing the separation of the on-line marketing database if the database was hosted on SQL Azure and the project was running on-premise outside of Microsoft Azure.
When multiple email recipients were set in the 'To email' field for E-mail notifications in the Forms application, the notification emails weren't sent.
When using a preview URL, images with spaces in the media library folder name were broken due to the system generating special URLs for them. This resulted in the system not being able to find the image at the generated URL.
When a page collection (TreeNodeCollection class) was accessed in a macro expression (e.g. through the Children property of a page), the permission check was incorrectly performed for the current user viewing the page. Now the permission check uses the context of the user who signed the macro by default. To get a collection of pages filtered according to the permissions of the current user, use the 'WithPermissionsCheck' property of the collection.
If a field using the 'reCAPTCHA' form control was set as required, the validation always failed in the resulting form.
If an error occurred when sending emails from the email queue, the email remained in the sending state in certain cases, blocking the queue (an application restart was required to start the email queue again).
In Marketing automation, 'Update from Data.com' steps didn't work correctly for processes that used an 'Activity performed' type trigger.
The system's dashboard didn't display icons for applications whose icon was set to an image file (instead of a font class) in the corresponding UI element.
After creating a binding field for a module class referencing certain types of object (for example 'Node'), an error occurred on the class's Code tab. The problem occurred if the object type name of the referenced object was different than its class name.
When copying pages including blogs, the system performed the license check after the copy action, which could lead to a number of blogs exceeding the license limitation.
When a workflow process with a timeout step was cloned, the timeout wasn't handled correctly and used the target step of the original workflow instead of the cloned workflow step.
When creating new web parts with the 'Generate the code files' option enabled, the web part's source file was created with an incorrect path to the code behind file (which prevented the web part from working).
After deleting a Facebook post through the Kentico administration interface, the system displayed an error message and the post remained in the listing. However, the post was correctly deleted from Facebook.
If a search produced more results than the maximum allowed number (1000 by default), an error occurred when attempting to view results over the limit. You can set the maximum allowed number of smart search results using the 'CMSSearchMaxResultsNumber' web.config key.
When using language prefixes for URLs, the 'GetDocumentURL()' transformation method always returned page URLs with a culture code prefix, even if a culture alias was set for the given culture.
After opening the 'Properties' subsections via the content tree context menu in the Pages application, the other main tabs ('Page', 'Design', 'Form') displayed a 'Forbidden access' error.
After applying hotfix 8.1.10, links and media from the current site are added as relative when using the ‘Insert link’ or ‘Insert image or media’ dialog in newsletter issues and templates.
The 'Alternate text' property was displayed in the dialog box for selecting images from media libraries used by the 'Media selection' form control, even if the property's value was not saved in the database.
Some marketing automation actions belonging to a process triggered by activity, e.g. the ‘Send newsletter issue’ action, didn’t work under certain conditions.
If the length of the text containing a macro was close to the given field's maximum length, re-saving the value appended the macro signature representation (hash sign), making the macro invalid.
When exporting global objects, the checkboxes for configuring the export options of specific object types lost their state when using the pager or after clicking the selection buttons (All, None, Default).
When creating collections of child objects based on XML data, the system filled the collection with incorrect data (all items were duplicates of the first processed object). For example, this could cause incorrect behavior when creating certain types of objects based on integration bus synchronization tasks, staging tasks or the data of REST requests.
An error occurred when users attempted to display a forum post in the tree layout of a forum.
Status of existing scheduled order discounts were displayed as incomplete after applying the 8.1.8 hotfix.
Access to Campaign reports was denied for users without the administrator privilege level if UI personalization was enabled.
After selecting a new image for a persona, the image wasn't displayed correctly until the persona was saved.
If a subscriber was deleted after the newsletter emails were sent out, links in the email content didn't work if the 'Track clicked links' feature was enabled for the newsletter.
In certain cases, users couldn’t cancel their subscription when using the ‘Message board unsubscription’ web part to unsubscribe from a message board.
Parsing of invalid macro expressions caused a stack overflow in special cases. For example, problems could occur while writing expressions in an interface with macro autocomplete help.
When importing resource strings using the Import toolkit, the strings were created without text values (translations), even if the 'TranslationText' field was correctly mapped to the source data.
Starting or restarting the application for sites under heavy load caused errors in certain cases ("Index was outside the bounds of the array" exception).
The object and document event categories (ObjectEvents, DocumentEvents, and TYPEINFO.Events) now provide the GetData event, which occurs before or after the system retrieves object/page data using the ObjectQuery/DocumentQuery API. The event allows developers to retrieve external data or edit the data loaded from Kentico.
The field editor displayed an incorrect field data type if the correct type wasn't available in the given context (for example when editing alternative forms).
The CurrentShoppingCart.ShoppingCartCustomer macro returned a NULL value instead of a correct object for anonymous users.
When editing an external image placed into an editable region using the 'Insert image or media' dialog, the image preview wasn't displayed correctly.
Access to the 'Custom table data' application wasn't checked correctly if no custom tables existed in the system.
The Categories application didn't show scrollbars on the Categories and Pages tabs if the displayed items reached the end of the page.
The 'Media gallery' web part didn't display the 'System settings' category in its properties.
When a user without the administrator privilege level viewed an 'Object listing' administration interface page, an error occurred if the UniGrid definition assigned to UI element didn't contain an <actions> section.
When loading multiple objects using the REST service, the 'where' URL parameter didn't work correctly with values containing the IS NULL expression, if the user account used for authorization wasn't a global administrator.
Newsletter click tracking links were generated incorrectly under certain circumstances.
Macros requiring the request context didn’t work in certain Online marketing scenarios, for example Marketing automation.
The "Hashtable insert failed. Load factor too high." error occurred on sites under heavy load in special cases.
Object type collections weren't thread-safe, which could cause errors in various situations on sites under heavy load (for example the "Index was outside the bounds of the array" error).
When defining the fields of custom module classes, changing the field type to or from 'Binary' caused type cast errors. It was also possible to flag existing Binary type fields as Required, resulting in an error.
When using a drop-down list in product option selectors, the certain currency symbols were encoded incorrectly.
Order details in the Orders application displayed incorrect discount rates when the default UI culture was set to a culture using a comma as a decimal mark.
When a tax class was assigned to a shipping option, the relationship was not saved.
Certain features on the system dashboard are not available when using Windows 7 or Windows Server 2008R2 without the service pack installed. In such cases, Kentico now logs a message into the 'Event log', informing about the need to apply the service pack to the operating system.
Overriding the 'selectall' query for a custom table didn't affect the records displayed on the 'Data' tab in the 'Custom tables' application.
The code behind paths of the default UI web parts weren't set to root-relative values, which caused errors when creating custom layouts for these web parts.
Validation of transformation names was too restrictive, which prevented some of the default transformations from being saved. Hyphens are now allowed in transformation names.
Form controls whose file path wasn't specified as an application root-relative path didn't work when assigned as the editing control of settings (in the Modules application).
When running in a web farm environment, updates of search indexes were performed only on one server, without updating the indexes on the other servers.
The URLs in the <atom:link> elements generated by RSS feed web parts weren't encoded correctly. This could lead to incorrect links, for example if the URL of the RSS feed contained query string parameters.
If CSS styles were added for a 'UI' type web part, they weren't linked on administration interface pages containing instances of the given web part.
Deleting a page template which was used as a default template for a page type resulted in the page type being deleted as well.
When creating new page types, it was possible to select page types without custom fields (containers) to inherit from.
Maximum length validation always failed for decimal type fields during the import. Additionally, importing objects with a decimal field from a file source failed due to an invalid type cast.
The provided forum layouts didn't handle time zone settings correctly.
Fields using the 'Numeric up/down' form control caused the editing form to report unsaved changes incorrectly in the administration interface. The system reported unsaved changes when navigating away from the page, even if no changes were made to the form's data.
When inserting content on the Web tab of the 'Insert image or media dialog', the properties section of the page didn't display a scrollbar if required.
When generating classes for page types, custom tables, forms or custom module classes, the code generator created invalid code if the object contained a field with a macro in the default value.
Automated tests inheriting from any of the CMS.Tests base classes failed when located outside of the Kentico solution folder (CMS).
The 'Windows LiveID' web part didn't display the sign out button for users logged in using their Microsoft account.
If a data source component with a filter and pager was used on a page, the pager failed to reset to the page 1 after applying a different filter. This resulted in empty pages being displayed.
With the 'Use site prefix for user names' setting enabled, global administrators couldn't create new site-specific users with a prefix in the 'Users' application. Global administrators can now add users with the prefix of the current site by selecting the 'assign to site' checkbox in the user creation form.
Error messages with very long text overflowed out of the message box in the administration interface.
The reCAPTCHA validator didn't work correctly on pages containing multiple components with reCAPTCHA support (if the first reCAPTCHA instance was hidden, other instances on the page weren't initialized correctly).
When inheriting from certain page types, the user interface didn't show that the created page type was inherited, even though it was inherited properly. This was caused by an incorrect site binding check.
The 'Email notification' and 'Autoresponder' features of forms generated invalid emails into the email queue if the recipient's email address wasn't valid, resulting in a blocked sending queue.
The equality operator in macro expressions (and Equals macro method) didn't work correctly for decimal numbers. An error occurred when resolving such expressions in certain cases.
When using certain license editions, workflow actions weren't allowed for 'Blog post' type pages under workflow with the 'Use check-in/check-out' setting enabled. The system displayed an error message informing about an exceeded license limit for the 'Social marketing' feature.
The unsubscription link contained in the email the user received after subscribing to a forum post didn't work.
Fields using the 'Encrypted password' form control lost their value when a postback occurred in the editing form (for example when editing SMTP server settings).
Using Kentico debugs in a web farm environment with database notification synchronization caused very high memory consumption. The debug memory usage was optimized.
When creating blog posts, automatically created blog months were created incorrectly if the database culture was different than English.
The 'Check out', 'Reset' and 'Archive' buttons were disabled in Internet Explorer 9 when using workflows with the check in and check out functionality.
Errors could occur when viewing online users while another user logged in. This only happened when online users weren't stored in the database.
When a new global contact was automatically created, it could get incorrectly assigned to a persona.
When system retrieved a 'PageInfo' for a certain page, a wrong value was returned by the database in certain cases.
The visual inheritance of page templates used for a page was not interpreted correctly on the Page and Design tab. This happened on pages that were moved to a different section of the content tree.
Port was removed from links in dynamic newsletters.
Dynamic newsletter link text may have been encoded incorrectly if the 'charset' attribute was missing from the HTML page.
Sending newsletter issue to a large amount of contact subscribers may have resulted in some of the e-mails not being processed correctly.
Web farm servers were not automatically removed from the database when scaling down instances on Microsoft Azure.
Media library preselection did not work correctly in the 'URL selector' form control.
KIM, Installer and Hotfix utility allowed to select incomplete Kentico projects and apply hotfixes and upgrades to them.
PDF preview was not resized correctly in Internet Explorer when Device profiles were enabled.
Macros dependent on site context were not evaluated correctly in certain scenarios.
The 'Last update' time displayed when editing smart search indexes wasn't updated correctly when running on Microsoft Azure.
The 'Root page template category' did not work correctly when creating a new page. Users could select from all page template categories if a postback occurred.
When whitespace was inserted before or after a URL in newsletter, and 'Track clicked links' was enabled, the generated tracking link did not work.
Contacts got occasionally stuck in the 'pending' state of an action step.
Localization macros were not correctly resolved in menu web parts and master page header
When running the Hotfix utility from the default location in the Kentico setup file folder, it wasn't possible to hotfix the setup files.
When editing cultures in the 'Localization' application, it was possible to set a 'Culture alias' already used by another culture. Culture aliases must be unique among all cultures.
Removing of CSS styles set for specific objects (such as page templates or web parts) didn't work correctly when storing virtual objects on the file system using Deployment or Source control mode.
The code generated for custom tables and forms had a class name that incorrectly contained the 'Item' suffix.
Links to attached images inserted via the text editor that were used together with extension-less URLs could result in a 404 error message.
Windows LiveID authentication didn't work due to URL changes in the LiveID authentication process.
The 'Select path' control in widgets incorrectly displayed an additional text box and the confirmation message in widgets placed on a widget dashboard was not styled correctly.
Web parts with 'File name' on their General tab starting with a slash sign '/' were not cloned together with their physical file.
The value of 'DataItemIndex' was not resolved correctly if used in a transformation that was processed by the 'ApplyTransformation' macro method.
Staging license requirements weren't checked correctly if staging was enabled for a site on a domain with a sufficient license, but the instance also contained a site using a lower license edition without staging support. A licensing error occurred when working with global objects via the domain name of the site where staging wasn't supported.
When an 'Activity' type score had its 'Validity' set to a value different than 'Until', the radio list was always set to 'Until' when editing the rule again.
Persona rules with a set expiration time did not work correctly due to an incorrect comparison operator in a stored procedure.
The 'Show template selection' option did not work for the 'Page (menu item)' page type.
Certain dialogs opened from the editor toolbar did not have padding.
KIM now creates new remote desktop configuration keys including the thumbprint certificate with every deployment of a project to Microsoft Azure.
Text in the 'Expression' field of the Macro console page was hidden behind a horizontal scroll bar.
The 'Numeric up/down' form control didn't work correctly if the minimum value was set to a high value (a JavaScript error occurred when changing the field's value in the resulting form).
When creating a new order, values from custom fields in the 'AddressInfo' class and the 'ShoppingCartInfo' class were not passed to custom fields in the 'OrderAddressInfo' class.
The 'UniButton' control incorrectly encoded 'LinkURL' properties twice.
Localization macros in Page titles were not correctly resolved.
The 'CurrentContact' cookie was set on every access even if the current value did not differ. This prevented caching of the requests.
When importing users and primary groups, the AD Import failed to assign imported users to roles created for the primary groups.
You can now use the 'CKEditor:DefaultToolbarSet' web.config key to set the default toolbar when using the 'default' option for toolbar sets on a page.
Placing a user widget onto a page that contained a hash tag in its URL did not reload the page.
In a repeater, specifying the 'TopN' or 'SelectTopN' properties to be larger than the available number of pages resulted in an error.
The layout of the preview mode of the Web part containers application user interface was broken.
When a viewer web part had the ‘Combine with default culture’ and ‘Check permissions’ properties enabled, and was set up to ‘Select top N pages’, it received incorrect columns from the database.
An error occurred when saving editing forms with the default values left in date and time fields while using a user interface culture different than English (for example when creating product options). Caused by ##TODAY## and ##NOW## macros in the default field values.
Synchronization of the 'Clone as Ad-hoc template' action via the staging service didn't work correctly for linked pages (the pages on the target server didn't have templates set).
When editing pages of the E-commerce sample site on the Page tab in the Pages application, workflow action buttons had incorrect styling.
The 'where' URL query string parameter wasn't applied for REST requests that loaded data of objects belonging to a specific site.
When sending PUT or POST requests for objects without a GUID column, the response data was invalid. The REST service no longer attempts to include the GUID value in the response data for such objects.
Repeater control placed in a web part did not load its view state data correctly after postback.
When a stylesheet link was added to the <head> section of a Master page, the styles weren't applied when viewing descendant pages in the Pages application.
Using a non-existing field in Page title macros resulted in the whole macro not getting resolved.
Comparison operators in macro expressions returned an incorrect result in special cases if the parameters were two decimal numbers.
When a macro was added into a form field with a required value through the 'Edit value' dialog, the field was considered empty and validation failed (for example when configuring web parts with required properties).
The dialog window used for object selection was too small in certain locations, resulting in an unnecessary scrollbar.
When using SSL (https) and the Google Chrome browser, the system dashboard served the aspx page file for download instead of rendering the web page in some cases.
When using SSL (https) and the Google Chrome browser, adding and removing applications on the system dashboard didn't work correctly.
When importing users and distribution groups (in which the users participate), the AD Import failed to assign the imported users to the roles created for the distribution groups. In special cases, this also caused an error when selecting users and groups in the AD Import wizard.
The AD Import utility always processed all users, even when the 'Import only new users with selected groups' option was enabled, resulting in decreased performance. The import settings now provide a new option - 'Import all users from selected groups and ignore other users', which allows more efficient import of users from specific AD groups.
When editing an unpaid order, the Payment result field on the Billing tab displayed "Method:" until the cache was cleared.
Binary data wasn't processed correctly for requests using the JSON format. For JSON requests, binary data must be submitted as a byte array value.
The Kentico Installer application stopped working if a running installation was canceled by the user.
An error occurred in the Select link dialog if the 'Community' module was removed from the installation.
The widget buttons weren't displayed correctly for the 'Youtube video' widget when editing pages in the Internet Explorer 11 browser.
An unhandled error was displayed when working with custom tables with an insufficient license.
When exporting the data of discounts into Excel/CSV, the Status column was empty.
An error occurred when saving a new global category while having a site selected in the site selector.
The Select available options dialog in product option categories preselected the wrong items.
An error occurred when creating new transformations from the web part properties dialog.
The image preview in the media selection dialog didn't work when running in a Microsoft Azure environment.
An unhandled error was displayed when attempting to save a scheduled task that was previously deleted (for example in a different tab).
After creating a new Workflow action, the system opened the administration interface for Marketing automation actions.
When running in a Microsoft Azure environment, searching for multiple words at once resulted in an error without returning any results.
The 'Data source' setting used when configuring fields that provide a list of selectable options incorrectly allowed users without the ‘Edit SQL Queries’ permission to change the selection from ‘SQL Query’ to another option.
When saving pages based on an ASPX page template containing an editable region, an error occurred in special scenarios.
Using the database updater as the synchronization mechanism for web farms caused very high CPU usage.
When deploying a Windows Azure project using KIM, the tool configured only the ServiceConfiguration.Cloud.cscfg file of the project and not the ServiceConfiguration.Local.cscfg file. This subsequently caused errors in Visual Studio.
When deploying a project using KIM, it is now possible to select and upload a pfx certificate for remote desktop connections to the Azure cloud service.
The height of the header in dialogs containing a selector wasn't calculated correctly.
The 'OutputConvertTablesToDivs' system property of widgets had an incorrect default value in the 'Data source' setting. This prevented making of modifications to the property.
The username field of the 'Logon form' web part was incorrectly prefilled with a user's e-mail address when returning after using the forgotton password functionality.
You can now preserve query string parameters in the Referrals and Referring sites statistics. The 'CMSWebAnalyticsRemoveReferralQuery' web.config key has been added and enables this behavior when set to 'false'.
When a translation via the Translations.com service was requested for a document in a workflow, the generated XLIFF file did not contain the <target> tag.
Using the Eval method in transformations for multiple members separated by dots caused a null reference exception if the data source wasn't a DataTable.
When staging customized UI elements, the UI element type of elements belonging to the affected module was set incorrectly on the target server. This resulted in incorrect behavior in the given parts of the administration interface.
When deleting smart search indexes, the corresponding index files weren't removed if external storage was used as the file system.
The ‘BasicForm.SaveData’ method ignored the ‘redirectUrlAfterSave’ parameter and the specified redirect did not work.
The 'NodeClass', 'NodeACL', 'NodeOwner', 'NodeTemplate', 'NodeWireframeTemplate', and 'NodeLinkedNodeSite' document instance macros were not resolved correctly.
The forgotten password functionality didn't work correctly if the system was configured to not require unique e-mail addresses for user accounts.
Filtering of content in the 'Recycle bin' application didn't work when the user interface culture was set differently than the database culture (usually English).
If a form field using the 'Country selector' form control had a default value set, the value of the field was lost when the resulting form was submitted.
File import did not work on the 'Import from server disk' tab after applying hotfix 8.0.8 or higher.
The verification of the PayPal IPN notification failed when the default UI culture was set to a culture using a comma as a decimal mark.
If a product was under workflow, the 'SKUAvailableItems' and 'SKUTrackInventory' fields weren't updated after modifying the product in the edit form.
Listing mode did not display linked documents correctly. This was caused by MultiDocumentQuery not retrieving all linked documents from the database in certain scenarios.
The 'Generate default transformation' button was missing when creating a new Document type transformation.
The 'Document library' web part didn't display the workflow step information correctly in some cases.
Media dialogs did not list documents with no published version in the right section of the dialog. The documents were visible only in the document tree of the dialog.
The 'Insert image or media' and 'Insert link' dialogs could not be opened when using the latest version of Internet Explorer 11 (11.0.11).
Creating global categories resulted in an error when the 'Allow global categories' setting was disabled.
Image link tags in ASPX templates were incorrectly resolved when the 'Allow CSS minification' setting was enabled.
Publishing a document in the 'Listing mode' could lead to skipping the 'Published step' when the step was situated right after an action step.
A JavaScript error occurred when changing the value of the 'Checkbox' wireframe web part outside of Design mode. Setting the wireframe checkbox value is now only allowed in Design mode.
Nested controls placed in transformations were not working correctly.
The 'Lightbox gallery' web part did not display content correctly and users were not able to switch between the displayed the content.
Translating a large amount of documents at once did not work due to an error in a SQL statement.
The macro report displayed a licensing error page if the results contained a macro whose resolving was limited by the current license.
The 'Custom table item selector' form control didn't work correctly for fields that had the 'Has depending field' setting enabled.
An error occurred when editing or creating e-mail templates in special cases ("The 'rules' start tag does not match the end tag of 'rule'.").
When a user's session expired before submitting an order in the last step of the checkout process, an unhandled error occurred and invoice generating failed.
When a custom field was added to the Order class and the field was displayed only in an alternative form for shipping, the field didn't save its value to the database.
Caching did not take Category names in listing web parts into account. This resulted in incorrect documents being displayed in certain cases.
When using the 'Facebook comments' web part, posted comments were registered to different URLs based on the used protocol, resulting in separate comment threads. To fix the issue, apply the hotfix, import the new version of the web part (Sites -> Import site or objects) and configure your web part instances to enforce a specific URL protocol.
Data from custom tables without the 'ItemOrder' column couldn't be retrieved using REST.
Dots were allowed in on-line form code names. This caused an error when cloning forms that contained dots in their code name.
When importing existing classes (system tables) containing custom fields with changed names (different than the same field on the target instance), the fields were merged incorrectly, resulting in multiple fields in the form definition.
When an administrator disabled a user selected or user preferred currency, the user could still shop with the disabled currency.
When paging was enabled in the "Select available options" dialog on the option categories page of products, the selected options were counted incorrectly under certain circumstances.
When creating a new custom table with the 'Use an existing database table' option, the resulting table didn't work correctly if the original table's primary key column didn't have the 'Identity' property enabled. The system now validates that the Identity property is enabled for the primary key before creating the table.
When automatically merging site contacts into global contacts, the child contact was also merged to another site contact in special cases. This could cause circular referencing of one merged contact to the other, leading to errors and timeouts.
The message 'Popup window was blocked. Please check your browser's settings.' appeared in the Chrome browser even if popup windows were not blocked.
Creating an A/B variant from a page that shared its page template containing an MVT variant with a different page caused an error.
The "Contact is registered for specified event" macro rule (Contact.RegisteredForEvent macro method) didn't work correctly if the on-line marketing database was separated.
The autoresponder emails of forms didn't include attachments on instances configured to store files in the file system.
After upgrading to Kentico 8, all original macro rules were disabled by default. The hotfix enables all macro rules whose 'Enabled' flag wasn't set manually.
The default values of fields weren't included when exporting and importing module classes.
Unhandled threading errors occurred during heavy website load in special cases.
The 'Products data source' web part only worked with the Where condition set up in the Document filter section. The Where condition property in the Products filter section was ignored.
If a product was under a workflow, data in the product UI was taken from the version history table (CMS_VersionHistory), causing inconsistencies in stock information when the product document was published.
When saving documents containing editable regions on the 'Page' tab, an unhandled error (System.ArgumentNullException) occurred in special cases.
Translating a versioned document from another language version caused an error.
The ‘CMSRepeater’ control didn't work correctly if a custom data source was assigned using data of a different type than ‘DataSet’ or ‘DataTable’.
Localization expressions used in web part zone containers were always resolved in the.
|
https://devnet.kentico.com/download/hotfixes
|
CC-MAIN-2019-22
|
refinedweb
| 47,504
| 50.57
|
A Better Way to Build Flash Banners
Flash banner development is often plagued by multiple files, fragmented code and messy timelines. This tutorial will show you how you can create a project template to serve as a solid base which will help you develop robust flash banners quickly, freeing you up to focus on bringing the creative idea to life.
Step 1: Setting Up Project Folder Structure
When building multiple flash banners, good file management is essential to keep your workflow efficient. Start your banner project template by creating the folder structure shown below:
Step 2: Creating A Photoshop Template
Now we are going to create a set of Photoshop templates to use when starting to design a flash banner campaign. (If you don't have Photoshop don't worry, you can skip ahead to Step 5). The Interactive Advertising Bureau maintain Ad Unit Guidelines that list the common dimensions of ad units (banners). For the purposes of this tutorial, we're going to create the three most common:
- The Wide Skyscraper (160px wide x 600px tall)
- The Medium Rectangle (300px wide x 250px tall)
- The Leaderboard (728px x 90px tall)
Let's start by creating the template for The Wide Skyscraper. Open Photoshop and select File > New. Configure the properties for your new Photoshop document as shown below:
Step 3: Adding Layer Groups
To keep the layers in your Photoshop file as structured as your project folders when it's time to produce a banner design, we're going to add Layer Groups in the template file to hold core design elements.
Using New Group from the menu in the top-right corner of the Layers panel create the Layer Group structure shown below:
The 'Background' layer group will hold design elements that are in the background of your banner at all times, for example a color or a texture.
The 'Foreground' layer group will hold design elements that are in the foreground of your banner at all times, such as a company logo.
The 'Frame' layer groups will hold design elements of the key moments in your flash banner sequence or interaction. This could include key messages, critical points of an animation, or user interfaces.
Step 4: Save
Now your template is ready to save. Go to File > Save As and navigate to the 'design' folder in your project folder structure. Name the file according to its dimensions, in this case '160x600' and ensure it's in Photoshop format, with Layers checked.
That's your first fantastic flash banner template created! Repeat these steps for The Medium Rectangle (300px wide x 250px tall) and The Leaderboard (728px x 90px tall). With these Photoshop templates completed, we're ready to move into Flash.
Step 5: Creating Your Flash Project
Let's start by creating a Flash Project so you can navigate your file structure in the Flash IDE. Open Flash CS4 (the process is very similar in Flash CS3 although the Flash interface will differ) and go to Window > Other Panels > Project. In the Project panel, click on the Projects dropdown and select New Project. Specify the Project name as 'BannerTemplate'.
For the Root Folder, navigate to the /banner_template/development/ folder you created in Step 1 using the folder icon. Make sure the ActionScript version is set to ActionScript 3.0 and click Create Project.
Step 6: Flash Project Classes Folder
Now we are going to set the Flash Project properties to help Flash do the work of stubbing out our classes for us. Click the dropdown with the gear icon in the top right corner of the Project panel and select Project Properties. For the 'Save classes in' field, navigate to the /as/ folder you created in Step 1 and click OK.
Step 7: Banner Package Folder
If you performed the last step correctly, you should see little code brackets are now on your /as/ folder icon. We are now going to create a folder for all classes specific to our banner templates. Select the /as/ folder and click the new folder icon in the bottom of the panel. In the dialog box that appears name your folder 'banner' and click OK.
Step 8: Banner Base Document Class
Now (at last!) you are ready to create your banner base document class. If you're not familiar with using document classes (or classes in general), it's a good idea to read this quick tip first.
With the /as/banner/ folder selected, click the Create Class icon in the bottom of the panel. In the Class field add the class name 'Banner' after the package name 'banner.' and click Create Class.
Now we need to take this class stub and modify it to a functional base document class. Add to the ActionScript to reflect the code shown below:
package banner { import flash.display.MovieClip; public class Banner extends MovieClip { // Constants: // Public Properties: // Private Properties: private var config:Object; // Initialization: public function Banner(config:Object = null):void { } // Public Methods: public function init():void { trace("Banner class initialized"); } // Protected Methods: } }
Let's quickly cover the changes we've made to the Banner class:
- Imported the MovieClip class.
- Made the class Banner extend MovieClip (so it can be used as a document class).
- Made the Banner document initialization function receive an optional config Object that we can use to pass in parameters.
- Created a public init() function that outputs a trace when called. The reason why this is handy will be explained when we start to create the banner .FLAs.
Right now this isn't doing much, but the important thing here is to build a class structure that allows us to centralize banner logic, reducing code repetition. From here, we can now extend the Banner class to create our individual banner document classes.
Step 9: Banner Document Classes
Let's start with the class file for The Wide Skyscraper. Create a "WideSkyscraper" class in your Flash project /as/banner/ folder just as you did for the "Banner" class. Take the generated class stub code and add to it so it looks like this:
package banner { public class WideSkyscraper extends Banner { // Constants: // Public Properties: // Private Properties: private var config:Object; // Initialization: public function WideSkyscraper() { super(); } // Public Methods: public override function init():void { trace("WideSkyscraper class initialized"); super.init(); } // Protected Methods: } }
Let's go over the changes we've made to the WideSkyscraper class:
- Made the WideSkyscraper class extend Banner.
- Called the base Banner class document function with super() in the WideSkyscraper document function.
- Overridden the base Banner class init() function with a custom init() function that outputs a trace when called, then calls the Banner class init() function.
Now repeat this step to create the banner document classes for the MediumRectangle and the Leaderboard. With this done, our document class structure is now in place.
Step 10: Creating Your Banner .FLAs
Now we can start to create the .FLA files we need. Again, let's start by creating the template for The Wide Skyscraper (160x600).
Open Flash CS4 and select File > New. Select "Flash File (ActionScript 3.0)" as the Type and click OK. In the Properties panel, edit the Publish and Properties settings as shown below:
Now save your file as "160x600.fla" in the /development/ folder of your project.
Step 11: Setting A Relative Source Path
We're now going to set a relative source path and a relative publish path. This becomes important when you want to make a copy of your banner template project, rename it and start working, or when you want to give the template to someone else. Absolute paths can be a real pain to update (especially across multiple files!) every time you want to start a project.
To set the source path go to File > Publish Settings and click the Flash tab. Now click the Settings button beside the Script dropdown to open the Advanced ActionScript 3.0 Settings window. Make sure Source Path is the active tab and click the '+' to add the './as' path. Now you can add the text 'banner.WideSkyscraper' in the Document Class field. Your window should look like this:
Click OK and your document is now linked to the WideSkyscraper class you created in Step 9.
Step 12: Setting A Relative Publish Path
To set the publish path, go to File > Publish Settings and click the Formats tab. We don't need the HTML file, so uncheck this box. In the publish path for the SWF, target the /www/ folder in your project folder as shown below. If everything looks correct, click OK. Your compiled swf will now be put in the /www/ folder when you preview or publish it.
There's a little more info on this in this Quick Tip screencast.
Step 13: The Main Timeline
For some reason, some ad serving systems require the first frame of your movie to be blank (Eyeblaster is an example of this), or to import their classes/include their ActionScript on the first frame. Often the Flash extensions you can install for these ad serving systems will refuse to package your file if you don't comply with this stipulation.
This is where the init() function you created in your document class earlier comes in. To ensure our template can be used in this situation, we are going to create a two frame timeline with the first frame blank, the second one containing a stop action and a call to the init() function as shown below:
If you compile this file now you should get the following in your Output panel which confirms your WideSkyscraper document class and Banner base document class are working:
WideSkyscraper class initialized Banner class initialized
Step 14: Creating A Library Symbol Class
Now we're going to create a library symbol to hold the banner creative, whether it's an animation or an interface. Go to Insert > New Symbol and give it the name "creative", check Export for ActionScript and give it the class "Creative". Make sure the type is Movie Clip and click OK.
Now add some placeholder text on the stage as shown below so you can see something when you add it to your stage in code later:
And that's all we need from the .FLA file! Go ahead and create the other .FLAs for The Medium Rectangle (300 wide x 250 tall) and The Leaderboard (728 wide x 90 tall). With this in place, we can revisit our Banner document class and start adding core functionality across all these banners.
Step 15: Adding A Background Sprite
Nearly all banner guidelines advise you to place a solid background color in your Flash file as the Stage background color can be overwritten when the Flash object is embedded in an HTML page. Rather than going into every .FLA and drawing a shape on the stage, we can centralize this task in code. Open up your Banner class and update the file to reflect the code shown below:
package banner { import flash.display.MovieClip; import flash.display.Sprite; import flash.display.Graphics; public class Banner extends MovieClip { // Constants: private const BG_COLOR:Number = 0x0E0E0E; // Public Properties: // Private Properties: private var config:Object; // Initialization: public function Banner(config:Object = null):void { } // Public Methods: public function init():void { trace("Banner class initialized"); // Create background var bg:Sprite = new Sprite(); bg.graphics.beginFill(BG_COLOR); bg.graphics.drawRect(0,0,stage.stageWidth,stage.stageHeight); bg.graphics.endFill(); addChild(bg); } // Protected Methods: } }
Let's recap the modifications we've made to the Banner class:
- Imported the Sprite and Graphics classes.
- Added a constant BG_COLOR and assigned to it a hexadecimal value.
- Created a bg sprite and drawn a rectangle with a fill of BG_COLOR that covers our whole stage.
- Added bg to the display list.
Now all you need to do is change the BG_COLOR value to get the right color background in all your banners.
Step 16: Adding to the Display List
Now we need to add the Creative symbol that we created in Step 14 to the display list as this will be the container for the creative execution. This is really easy to do, just update the init() function to this:
//); }
Step 17: Adding A Clickable Area
Another common requirement is for the banner's clickable area to open a new window based on a "clicktag" variable passed in from the HTML page when the Flash object is embedded. Let's create a utility class to handle this for us. In the Flash Project panel navigate to your /as/banner/ folder and create a new subfolder called /util/. Create a new class in here called 'ClickArea' and code this as shown below:
package banner.util { import flash.display.Sprite; import flash.display.Graphics; import flash.display.Stage; import flash.events.MouseEvent; import flash.net.URLRequest; import flash.net.navigateToURL; public class ClickArea extends Sprite { // Private Properties: private var clickthroughURL:String; // Initialization: public function ClickArea(loaderInfo:Object,stage:Stage) { // Create clickable area this.graphics.beginFill(0x00FF00,0); this.graphics.drawRect(0,0,stage.stageWidth,stage.stageHeight); this.graphics.endFill(); // Determine clickthrough URL (by checking known naming conventions) if(loaderInfo.parameters.clicktag != null) { clickthroughURL = loaderInfo.parameters.clicktag; } else if(loaderInfo.parameters.clickTag != null) { clickthroughURL = loaderInfo.parameters.clickTag; } else if(loaderInfo.parameters.clickTAG != null) { clickthroughURL = loaderInfo.parameters.clickTAG; } // Add button behaviour this.buttonMode = true; this.addEventListener(MouseEvent.CLICK, mouseClickHandler, false, 0, true); } // Public Methods: // Protected Methods: private function mouseClickHandler(e:MouseEvent):void { if(clickthroughURL != null) { navigateToURL(new URLRequest(clickthroughURL),"_blank"); } else { trace("Clickthrough"); } } } }
Let's quickly summarize what the ClickArea class is doing:
- Imports the necessary Flash classes.
- Is based on the Sprite class.
- ClickArea's constructor function requires two variables, the loaderInfo Object and the Stage. We will pass these in from our Banner document class.
- Draws a transparent clickable area the width and height of the stage.
- Attempts to get a clickthrough url out of the loaderInfo object and assign it to the clickthroughURL variable.
- Adds behavior on mouse click that launches a clickthroughURL in a new window or outputs a trace if no URL is available. This is handy when testing in the Flash IDE.
Now open up your Banner class again and add import banner.util.ClickArea under your list of Flash class imports and update the init() function to instantiate the ClickArea and add it to the display list as shown below:
//); // Create clickable area var clickArea:ClickArea = new ClickArea(loaderInfo,stage); addChild(clickArea); }
We're adding the basic fundamentals of banner development into this class, but the real value here is that we are adding these to all our banners in one centralized class. Any common tasks you find yourself doing repeatedly in banners can be added in here to free up your time to craft the unique animation or interaction the banner creative has.
Step 18: Publishing Your .FLAs
With all of our code nicely organized, opening the individual .FLAs and publishing them is starting to feel like a hassle. The good news is, we can automate this as well. Go to your Project panel and check the tickbox beside each banner .FLA format (if you can't see them in this list, click on the dropdown with the Gear icon and select Refresh) as shown below:
Now you can publish all of your banners to the /www/ folder you configured in Step 12 by clicking on the dropdown with the Gear icon and selecting Publish Project.
Step 19: HTML Presentation Page
The last element we need to complete to finish our banner project template is creating an HTML page to present them on so they can be shown to a client easily. Download SWFObject and place swfobject.js in the /www/ folder, then create a HTML file in the editor of your choice and write the code shown below:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <meta http- <title>Banner Signoff Template</title> <style type="text/css" media="screen"> body { padding:60px; } .banner { padding:0 40px 40px 0; float:left; display:block; } </style> <script src="swfobject.js" type="text/javascript"></script> <script type="text/javascript"> var flashvars = { clickTag: "" } swfobject.embedSWF("160x600.swf", "wide_skyscraper", "160", "600", "9", false, flashvars); swfobject.embedSWF("300x250.swf", "showcase", "300", "250", "9", false, flashvars); swfobject.embedSWF("728x90.swf", "leaderboard", "728", "90", "9", false, flashvars); </script> </head> <body> <div class="banner"><div id="wide_skyscraper"></div></div> <div class="banner"><div id="showcase"></div></div> <div class="banner"><div id="leaderboard"></div></div> </body> </html>
You can read more about how to use SWFObject in the online documentation, but let's quickly cover the key things we're doing here:
- Declaring two css styles to create some space around the page and the individual banners.
- Including swfobject.js, creating a test clickTag to pass in to our banners and writing the swfobject embed statements.
- Defining a div structure and assigning a unique id to a div for SWFObject to dynamically replace with our SWF file.
Now save this file as index.html in the /www/ folder. You can now preview your banners in a web browser or upload this folder somewhere for your client to view:
Step 20: Review Your Project File Structure
Let's finish by reviewing our populated folder structure and ensuring all files are in the appropriate place:
You now have a project template with:
- A set of Photoshop templates to produce the artwork in.
- A set of Flash templates to import library assets into and create timeline animations in.
- A document class structure that allows you to implement functionality in one or all banner formats.
- A way to compile all of your banners at once.
- An HTML page to view all the banners together for yourself and your client.
Conclusion
This tutorial is really only the start. Identify recurring tasks in your banner projects and tailoring your project template to address them to make it speed up your workflow as much as possible. Extend upon it by including your favorite lightweight frameworks (TweenNano is great for scripted animation) and libraries so your favorite tools are at your fingertips when you start your next project.
If you use Subversion or some other kind of source control, this would be a great project to include in your repository so you can improve on it over time and check out the latest revision for each banner project you start.
Got ideas about how this could be improved or comments about issues that hamper your banner development? Join in the discussion below!
Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this postPowered by
|
http://code.tutsplus.com/tutorials/a-better-way-to-build-flash-banners--active-3694
|
CC-MAIN-2015-40
|
refinedweb
| 3,100
| 62.48
|
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
#include <boost/math/special_functions/jacobi_elliptic.hpp>
namespace boost { namespace math { template <class T> calculated-result-type jacobi_elliptic(T k, T u, T* pcn, T* pdn); template <class T, class Policy> calculated-result-type jacobi_elliptic(T k, T u, T* pcn, T* pdn, const Policy&); }} // namespaces
The function jacobi_elliptic calculates the three copolar Jacobi elliptic functions sn(u, k), cn(u, k) and dn(u, k). The returned value is sn(u, k), and if provided, *pcn is set to cn(u, k), and *pdn is set to dn(u, k).
The functions are defined as follows, given:
The the angle φ is called the amplitude and:
The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use etc. Refer to the policy documentation for more details.
The following graphs illustrate how these functions change as k changes: for small k these are sine waves, while as k tends to 1 they become hyperbolic functions:
These functions are computed using only basic arithmetic operations and trigomometric functions, so there isn't much variation in accuracy over differing platforms. Typically errors are trivially small for small angles, and as is typical for cyclic functions, grow as the angle increases. Note that only results for the widest floating point type on the system are given as narrower types have effectively zero error. All values are relative errors in units of epsilon.
The tests use a mixture of spot test values calculated using the online calculator at functions.wolfram.com, and random test data generated using MPFR at 1000-bit precision and this implementation.
For k > 1 we apply the relations:
Then filter off the special cases:
sn(0, k) = 0 and cn(0, k) = dn(0, k) = 1.
sn(u, 0) = sin(u), cn(u, 0) = cos(u) and dn(u, 0) = 1.
sn(u, 1) = tanh(u), cn(u, 1) = dn(u, 1) = 1 / cosh(u).
And for k4 < ε we have:
Otherwise the values are calculated using the method of arithmetic geometric means.
|
http://www.boost.org/doc/libs/1_52_0_beta1/libs/math/doc/sf_and_dist/html/math_toolkit/special/jacobi/jacobi_elliptic.html
|
CC-MAIN-2016-50
|
refinedweb
| 371
| 50.06
|
Ask the user questions
Note
This topic is pre-release documentation for V4 SDK content and is subject to change. You can find V3 SDK (stable) content here.
At its core, a bot is built around the conversation with a user. Conversation can take many forms; they may be short or may be more complex, may be asking questions or may be answering questions. What shape the conversation takes depends on several factors, but they all involve a conversation.
This tutorial guides you through building up a conversation, from asking a simple question through a multi-turn bot. Our example will be around reserving a table, but you can imagine a bot that does a variety of things through a multi-turn conversation, such as placing an order, answering FAQs, making reservations, and so on.
An interactive chat bot can respond to user input or ask user for specific input. This tutorial will show you how to ask a user a question using the
Prompts library, which is part of
Dialogs. Dialogs can be thought of as the container that defines a conversation structure, and prompts within dialogs is covered more in depth in its own how-to article.
Prerequisite
Code in this tutorial will build on the basic bot you created through the Get Started experience.
Get the package
Install the Microsoft.Bot.Builder.Dialogs package from Nuget packet manager.
Import package to bot
Add reference to both dialogs and prompts in your bot code.
// ... using Microsoft.Bot.Builder.Dialogs; using Microsoft.Bot.Builder.Prompts; // ...
This will give you access to the
DialogSet and
Prompts library that you will use to ask the user questions.
DialogSet is just a collection of dialogs, which we structure in a waterfall pattern. This simply means that one dialog follows another.
Instantiate a dialogs object
Instantiate a
dialogs object. You will use this dialog object to manage the question and answer process.
Declare a member variable in your bot class and initialize it in the constructor for your bot.
public class MyBot : IBot { private readonly DialogSet dialogs; public MyBot() { dialogs = new DialogSet(); } // The rest of the class definition is omitted here }
Define a waterfall dialog
To ask a question, you will need at least a two step waterfall dialog. For this example, you will construct a two step waterfall dialog where the first step asks the user for their name and the second step greets the user by name.
Modify your bot's constructor to add the dialog:(); } }); }
The question is asked using a
textPrompt method that came with the
Prompts library. The
Prompts library offers a set of prompts that allows you to ask users for various types of information. For more information about other prompt types, see Prompt user for input.
For the prompting to work, you will need to add a prompt to the
dialogs object with the dialogId
textPrompt and create it with the
TextPrompt() constructor.(); } }); // add the prompt, of type TextPrompt dialogs.Add("textPrompt", new Builder.Dialogs.TextPrompt()); }
Once the user answers the question, the response can be found in the
args parameter of step 2.
Now that you have defined your
dialogs to ask a question, you need to call on the dialog to start the prompting process.
Start the dialog
Modify your bot logic to something like this:
public async Task OnTurn(ITurnContext context) { // We'll cover state later, in the next tutorial var state = ConversationState<Dictionary<string, object>>.Get(context); var dc = dialogs.CreateContext(context, state); if (context.Activity.Type == ActivityTypes.Message) { await dc.Continue(); if(!context.Responded) { await dc.Begin("greetings"); } } }
Bot logic goes in the
OnTurn() method. Once the user says "Hi" then the bot will start the
greetings dialog. The first step of the
greetings dialog prompts the user for their name. The user will send a reply with their name as a message activity, and the result is send to step two of the waterfall through the
dc.Continue() method. The second step of the waterfall, as you have defined it, will greet the user by their name and ends the dialog.
Define a more complex waterfall dialog
Now that we've covered how a waterfall dialog works and how to build one, let's try a more complex dialog aimed at reserving a table.
To manage the table reservation request, you will need to define a waterfall dialog with four steps. In this conversation, you will also be using a
DatetimePrompt and
NumberPrompt in additional to the
TextPrompt.
Start with the Echo Bot template, and rename your bot to CafeBot. Add a
DialogSet and some static member variables.
namespace CafeBot { public class CafeBot : IBot { private readonly DialogSet dialogs; // Usually, we would save the dialog answers to our state object, which will be covered in a later tutorial. // For purpose of this example, let's use the three static variables to store our reservation information. static DateTime reservationDate; static int partySize; static string reservationName; // the rest of the class definition is omitted here // but is discussed in the rest of this article } }
Then define your
reserveTable dialog. You can add the dialog within the bot class constructor.
public CafeBot() { dialogs = new DialogSet(); // Define our dialog dialogs.Add("reserveTable", new WaterfallStep[] { async (dc, args, next) => { // Prompt for the guest's name. await dc.Context.SendActivity("Welcome to the reservation service."); await dc.Prompt("dateTimePrompt", "Please provide a reservation date and time."); }, async(dc, args, next) => { var dateTimeResult = ((DateTimeResult)args).Resolution.First(); reservationDate = Convert.ToDateTime(dateTimeResult.Value); // Ask for next info await dc.Prompt("partySizePrompt", "How many people are in your party?"); }, async(dc, args, next) => { partySize = (int)args["Value"]; // Ask for next info await dc.Prompt("textPrompt", "Whose name will this be under?"); }, async(dc, args, next) => { reservationName = args["Text"]; string msg = "Reservation confirmed. Reservation details - " + $"\nDate/Time: {reservationDate.ToString()} " + $"\nParty size: {partySize.ToString()} " + $"\nReservation name: {reservationName}"; await dc.Context.SendActivity(msg); await dc.End(); } }); // Add a prompt for the reservation date dialogs.Add("dateTimePrompt", new Microsoft.Bot.Builder.Dialogs.DateTimePrompt(Culture.English)); // Add a prompt for the party size dialogs.Add("partySizePrompt", new Microsoft.Bot.Builder.Dialogs.NumberPrompt<int>(Culture.English)); // Add a prompt for the user's name dialogs.Add("textPrompt", new Microsoft.Bot.Builder.Dialogs.TextPrompt()); }
The conversation flow of the
reserveTable dialog will ask the user 3 questions through the first three steps of the waterfall. Step four processes the answer to the last question and sends the user the reservation confirmation.
Each waterfall step of the
reserveTable dialog uses a prompt to ask the user for information. The following code was used to add the prompts to the dialog set.
dialogs.Add("dateTimePrompt", new Microsoft.Bot.Builder.Dialogs.DateTimePrompt(Culture.English)); dialogs.Add("partySizePrompt", new Microsoft.Bot.Builder.Dialogs.NumberPrompt<int>(Culture.English)); dialogs.Add("textPrompt", new Microsoft.Bot.Builder.Dialogs.TextPrompt());
Now, you are ready to hook this into the bot logic.
Start the dialog
Modify your bot's
OnTurn to contain the following code:
public async Task OnTurn(ITurnContext context) { if (context.Activity.Type == ActivityTypes.Message) { // The type parameter PropertyBag inherits from // Dictionary<string,object> var state = ConversationState<Dictionary<string, object>>.Get(context); var dc = dialogs.CreateContext(context, state); await dc.Continue(); // Additional logic can be added to enter each dialog depending on the message received if(!context.Responded) { if (context.Activity.Text.ToLowerInvariant().Contains("reserve table")) { await dc.Begin("reserveTable"); } else { await context.SendActivity($"You said '{context.Activity.Text}'"); } } } }
In Startup.cs, change the initialization of the ConversationState middleware to use a class deriving from
Dictionary<string,object> instead of
EchoState.
For example, in
Configure():
options.Middleware.Add(new ConversationState<Dictionary<string, object>>(dataStore));
Next steps
In this tutorial, the bot is saving the user's input to variables within our bot. If you want to store or persist this information, you need to add state to the middleware layer. Let's take a closer look at how to persist user state data in the next tutorial.
|
https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-waterfall?view=azure-bot-service-4.0&tabs=cstab
|
CC-MAIN-2018-39
|
refinedweb
| 1,323
| 50.43
|
import "github.com/gogf/gf/g/os/grpool"
Package grpool implements a goroutine reusable pool.
Add pushes a new job to the pool using default goroutine pool. The job will be executed asynchronously.
Jobs returns current job count of default goroutine pool.
Size returns current goroutine count of default goroutine pool.
Goroutine Pool
New creates and returns a new goroutine pool object. The parameter <limit> is used to limit the max goroutine count, which is not limited in default.
Add pushes a new job to the pool. The job will be executed asynchronously.
Cap returns the capacity of the pool. This capacity is defined when pool is created. If it returns -1 means no limit.
Close closes the goroutine pool, which makes all goroutines exit.
IsClosed returns if pool is closed.
Jobs returns current job count of the pool.
Size returns current goroutine count of the pool.
Package grpool imports 3 packages (graph) and is imported by 2 packages. Updated 2019-06-26. Refresh now. Tools for package owners.
|
https://godoc.org/github.com/gogf/gf/g/os/grpool
|
CC-MAIN-2019-30
|
refinedweb
| 169
| 62.44
|
.
was):
Hi there,
I'm very pleased to announce, after 10 years of existence, the 1.0 release of Pylint.
This release has a hell long ChangeLog, thanks to many contributions and to the
10th anniversary sprint we hosted during june. More details about changes below.
Chances are high that your Pylint score will go down with this new release that includes a
lot of new checks :) Also, there are a lot of improvments on the Python 3 side
(notably 3.3 support which was somewhat broken).
You may download and install it from Pypi or from Logilab's debian
repositories. Notice Pylint has been updated to use the new Astroid library
(formerly known as logilab-astng) and that the logilab-common 0.60 library
includes some fixes necessary for using Pylint with Python3 as well as
long-awaited support for namespace packages.
For those interested, below is a comprehensive list of what changed: everyone,
After 10 years of hosting Pylint on our own forge at logilab.org, we've decided to publish version 1.0 and move Pylint and astng development to BitBucket. There has been repository mirrors there for some time, but we intend now to use all BitBucket features, notably Pull Request, to handle various development tasks.
There are several reasons behind this. First, using both BitBucket and our own forge is rather cumbersome, for integrators at least. This is mainly because BitBucket doesn't provide support for Mercurial's changeset evolution feature while our forge relies on it. Second, our forge has several usability drawbacks that make it hard to use for newcomers, and we lack the time to be responsive on this. Finally, we think that our quality-control process, as exposed by our forge, is a bit heavy for such community projects and may keep potential contributors away.
All in all, we hope this will help to have a wider contributor audience as well as more regular maintainers / integrators which are not Logilab employees. And so, bring the best Pylint possible to the Python community!
Logilab.org web pages will be updated to mention this, but kept as there is still valuable information there (eg tickets). We may also keep automatic tests and package building services there.
So, please use as main web site regarding pylint development. Bug reports, feature requests as well as contributions should be done there. The same move will be done for Pylint's underlying library, logilab-astng (). We also wish in this process to move it out of the 'logilab' python package. It may be a good time to give it another name, if you have any idea don't hesitate to express yourself.
Last but not least, remember that Pylint home page may be edited using Mercurial, and that the new is generated using the content found in Pylint source doc subdirectory.
Pylint turning 10 and moving out of its parents is probably a good time to thank Logilab for paying me and some colleagues to create and maintain this project!!).
image!
The!
Hi there!
I'm pleased to announce new releases of pylint and its underlying
library logilab-astng. See and for more info.
Those releases include mostly fixes and a few enhancements. Python 2.6
relative / absolute imports should now work fine and Python 3 support
has been enhanced. There are still two remaining failures in astng
test suite when using python 3, but we're unfortunatly missing
resources to fix them yet.
Many thanks to everyone who contributed to this release by submitting
patches or by participating to the latest bugs day.
Hey guys,
we'll hold the next pylint bug day on july 8th 2011 (friday). If some of you want to come and work with us in our Paris office, you'll be welcome.
You can also join us on jabber / irc: ;).
You.
I recently understood why easy_install wasn't able to find so many of our packages anymore.
The problem was due to a recent change on our website. The project page was ajaxified, and since easy_install uses some screenscrapping techniques to get distribution archives, it can not find the files it is looking for.
To fix this, we should make our tarballs downloadable from PyPI, by using
python setup.py register sdist upload
instead of the current:
python setup.py register
Uploading our public python software packages to PyPI will make them easy_installable in a breeze !
Here is a piece of code I've written which I thought may be useful to some other people...
You'll find here a simple python module to use with the Google AppEngine SDK to monkey patch the datastore API in order to get an idea of the calls performed by your application.
To instrument of the datastore, put at the top level of your handler file
import instrdatastore
Note that it's important to have this before any other import in your application or in the google package to avoid that some modules will use the unpatched version of datastore functions (and hence calls to those functions wouldn't be considered).
Then add at the end of your handler function
instrdatastore.print_info()
The handler file should look like this:
"""my handler file with datastore instrumenting activated"""
import instrdatastore
# ... other initialization code
# main function so this handler module is cached
def main():
from wsgiref.handlers import CGIHandler
from ginco.wsgi.handler import ErudiWSGIApplication
application = ErudiWSGIApplication(config, vreg=vreg)
CGIHandler().run(application)
instrdatastore.print_info()
if __name__ == "__main__":
main()
Now you should see in your logs the number of Get/Put/Delete/Query which has been done during request processing
2008-06-23 06:59:12 - (root) WARNING: datastore access information
2008-06-23 06:59:12 - (root) WARNING: nb Get: 2
2008-06-23 06:59:12 - (root) WARNING: arguments (args, kwargs):
((datastore_types.Key.from_path('EGroup', u'key_users', _app=u'winecellar'),), {})
((datastore_types.Key.from_path('EUser', u'key_test@example.com', _app=u'winecellar'),), {})
2008-06-23 06:59:12 - (root) WARNING: nb Query: 1
2008-06-23 06:59:12 - (root) WARNING: arguments (args, kwargs):
(({'for_user =': None}, 'EProperty'), {})
2008-06-23 06:59:58 - (root) WARNING: nb Put: 1
2008-06-23 06:59:58 - (root) WARNING: arguments (args, kwargs):
(({u'login': None, u'last_usage_time': 1214204398.2022741, u'data': ""},), {})
I'll probably extend this as the time goes. Also notice you may encounter some problems with the automatic reloading feature of the dev app server when instrumentation is activated, in which case you should simply restart the web server.
|
http://www.logilab.org/blog/6056/blogentries/sthenault?vtitle=Blog%20entries%20by%20Sylvain%20Thenault%20%5B27%5D
|
CC-MAIN-2014-15
|
refinedweb
| 1,082
| 62.27
|
1 /* 2 * Fimex, UnitsConverterDecl.h 3 * 4 * (C) Copyright 2019, met.no 5 * 6 * Project Info: 7 * 8 * This library is free software; you can redistribute it and/or modify it 9 * under the terms of the GNU Lesser General Public License as published by 10 * the Free Software Foundation; either version 2.1 of the License, or 11 * (at your option) any later version. 12 * 13 * This library is distributed in the hope that it will be useful, but 14 * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY 15 * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public 16 * License for more details. 17 * 18 * You should have received a copy of the GNU Lesser General Public 19 * License along with this library; if not, write to the Free Software 20 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, 21 * USA. 22 */ 23 #ifndef UNITSCONVERTERDECL_H_ 24 #define UNITSCONVERTERDECL_H_ 25 26 #include <memory> 27 28 namespace MetNoFimex { 29 30 class UnitsConverter; 31 typedef std::shared_ptr<UnitsConverter> UnitsConverter_p; 32 33 } /* namespace MetNoFimex */ 34 35 #endif /* UNITSCONVERTERDECL_H_ */
|
https://fossies.org/linux/fimex/include/fimex/UnitsConverterDecl.h
|
CC-MAIN-2019-51
|
refinedweb
| 183
| 51.58
|
In this codelab, you'll learn about convolutions and why they are so powerful in computer vision scenarios.
In the previous codelab, you created a simple Deep Neural Network (DNN) for computer vision of fashion items. This was limited because it required the clothing item to be the only thing in the picture, and it had to be centered.
Of course, this isn't a realistic scenario. You'll want your DNN to be able to identify the clothing item in pictures with other objects, or where it isn't positioned front and center. To do this, you'll need to use convolutions.
Prerequisites
This codelab builds on work completed in two previous installments, Say hello to the "Hello, World" of machine learning, and Build a computer vision model. Please complete those codelabs before continuing.
What you'll learn
- What are convolutions
- How to create a feature map
- What is pooling
What you'll build
- A feature map of an image
What you'll need
You can find the code for the rest of the codelab running in Colab.
You'll also need TensorFlow installed, and the libraries you installed in the previous codelab.
A convolution is a filter that passes over an image, processes it, and extracts the important features.
Let's say you have an image of a person wearing a sneaker. How would you detect that a sneaker is present in the image? In order for your program to "see" the image as a sneaker, you'll have to extract the important features, and blur the inessential features. This is called feature mapping.
The feature mapping process is theoretically simple. You'll scan every pixel in the image and then look at its neighboring pixels. You multiply the values of those pixels by the equivalent weights in a filter.
For example:
In this case, a 3x3 convolution matrix, or image kernel, is specified.
The current pixel value is 192. You can calculate the value of the new pixel by looking at the neighbor values, multiplying them by the values specified in the filter, and making the new pixel value the final amount.
Now it's time to explore how convolutions work by creating a basic convolution on a 2D grayscale image.
You'll demonstrate that with the ascent image from SciPy. It's a nice built-in picture with lots of angles and lines.
Start by importing some Python libraries and the ascent picture:
import cv2 import numpy as np from scipy import misc i = misc.ascent()
Next, use the Pyplot library
matplotlib to draw the image so that you know what it looks like:
import matplotlib.pyplot as plt plt.grid(False) plt.gray() plt.axis('off') plt.imshow(i) plt.show()
You can see that it's an image of a stairwell. There are lots of features you can try and isolate. For example, there are strong vertical lines.
The image is stored as a NumPy array, so we can create the transformed image by just copying that array. The size_x and size_y variables will hold the dimensions of the image so you can loop over it later.
i_transformed = np.copy(i) size_x = i_transformed.shape[0] size_y = i_transformed.shape[1]
First, make a convolution matrix (or kernel) as a 3x3 array:
# This filter detects edges nicely # It creates a filter, calculate the output pixels. Iterate over the image, leaving a 1-pixel margin, and multiply each of the neighbors of the current pixel by the value defined in the filter.
That means that the current pixel's neighbor above it and to the left of it will be multiplied by the top-left item in the filter. Then, multiply the result by the weight and ensure that the result is in the range 0 through 255.
Finally,, plot the image to see the effect of passing the filter over it:
# Plot the image. Note the size of the axes -- they are 512 by 512 plt.gray() plt.grid(False) plt.imshow(i_transformed) #plt.axis('off') plt.show()
Consider the following filter values and their impact on the image.
Using [-1,0,1,-2,0,2,-1,0,1] gives you a very strong set of vertical lines:
Using [-1,-2,-1,0,0,0,1,2,1] gives you horizontal lines:
Explore different values! Also, try differently sized filters, such as 5x5 or 7x7.
Now that you've identified the essential features of the image, what do you do? How do you use the resulting feature map to classify images?
Similar to convolutions, pooling greatly helps with detecting features. Pooling layers reduce the overall amount of information in an image while maintaining the features that are detected as present.
There are a number of different types of pooling, but you'll use one called Maximum (Max) Pooling.
Iterate over the image and, at each point, consider the pixel and its immediate neighbors to the right, beneath, and right-beneath. Take the largest of those (hence max pooling) and load it into the new image. Thus, the new image will be one-fourth the size of the old.
The following code will show a (2, 2) pooling. Run it to see the output.
You'll see that while the image is one-fourth the size of the original, it kept that plot. The image is now 256x256, one-fourth of its original size, and the detected features have been enhanced despite less data now being in the image.
You've built your first computer vision model! To learn how to further enhance your computer vision models, proceed to Build convolutional neural networks (CNNs) to enhance computer vision.
|
https://codelabs.developers.google.com/codelabs/tensorflow-lab3-convolutions
|
CC-MAIN-2020-50
|
refinedweb
| 941
| 64.41
|
XQuery/HTML Table View
Motivation[edit]
When we have an XML file with uniform tabular structure, with no repeated or compound children, a generic table view is handy. We can use the XML element names as the column headers.
Sequence as XHTML Table[edit]
Here is a function which achieves this. It takes a sequence of elements and creates an HTML table, one row per node in the sequence. It uses a little introspection with the name(function) to get the names of the children of the first node to form the column headings. For each node, the children is accessed by node name so that there is no requirement that all elements are present or in the same order as in the first node.
declare base-uri ""; declare function local:sequence-to-table($seq) { (: assumes all items in $seq have the same simple element structure determined by the structure of the first item :) <table border="1"> <thead> <tr> {for $node in $seq[1]/* return <th>{name($node)}</th> } </tr> </thead> {for $row in $seq return <tr> {for $node in $seq[1]/* let $data := string($row/*[name(.)=name($node)]) return <td>{$data}</td> } </tr> } </table> };
This could then be used to view selected nodes:
local:sequence-to-table(//Emp)
This approach is ideal if you know that the first node in a dataset has all the elements for all the columns in the table.
This approach is used in the later Database example to display computed sequences.
The following line must be added if you are using strict XHTML. This puts all the HTML tags (<table>, <htead>, <th>, <tbody>, <tr> and <td>) in the correct namespace.
declare base-uri "";
Sequence as CSV[edit]
A similar approach can be used to export the sequence as CSV. Here the header Content-Disposition is set so the Browser will allow the generated file to be opened directly in Excel.
declare option exist:serialize "method=text media-type=text/text"; declare variable $sep := ','; declare variable $eol := ' '; declare function local:sequence-to-csv($seq) as xs:string { (: returns a multi-line string of comma delimited strings :) string-join( (string-join($seq[1]/*/name(.),$sep), for $row in $seq return string-join( for $node in $seq[1]/* let $data := string($row/*[name(.)=name($node)]) return if (contains($data,$sep)) then concat('"',$data,'"') else $data , $sep) ),$eol ) }; let $x := response:set-header('Content-Disposition','inline;filename=empdept.csv') return local:sequence-to-csv(//Emp)
|
https://en.wikibooks.org/wiki/XQuery/HTML_Table_View
|
CC-MAIN-2016-50
|
refinedweb
| 407
| 51.07
|
Lua:Tutorials:Scripting with LuaPlus and Cpp
Contents
A small guide to Lua scripting
At some point you would probably like to add some scripting to your game.
For example, think of particle emitters or characters. Everytime you change something, you have to recompile. With decent sized projects this could mean making a cup of tea everytime :). With some kind of scripting you could simply change your script and run your game immediately again. The script has changed, and therefore so has the character, particle emitter, AI, etc.
Scripting is a hot item these days. For example, a game I still like quite a lot -Unreal Tournament- consists for a large part of "UnrealScript." This script controls all kinds of objects throughout the game. Elevators, switches, AI, etc.
Here is a small excerpt from UnrealScript:
simulated final function RandSpin(float spinRate) { DesiredRotation = RotRand(); RotationRate.Yaw = spinRate * 2 *FRand() - spinRate; RotationRate.Pitch = spinRate * 2 *FRand() - spinRate; RotationRate.Roll = spinRate * 2 *FRand() - spinRate; }
Looks quite like normal C++ code, doesn't it?
Lua
I was reading through some programming books and read about Lua. Lua is a completely free scripting language. Even for commercial use. Basically the only thing you have to do is to mention it in a readme, and if you want to be really cool, add the Lua logo to your game.
Lua is based on ANSI C. It should compile on any platform. This means that if you ever try to port your game to linux, you don't need to change scripts. Also, Lua can be compiled or just "plain text". If your code (Script) is plain text, Lua will compile it on the fly when you load it in your game.
If you like to see compile errors, and have a compiled script, simply use the command line utility. You do not have to change any code to load "text" or binary Lua files! So, if you're done developping: Compile the Lua files and replace the text versions, and it should work just fine.
The main site of Lua is. Check it out for some more details.
The Language
Lua looks a bit like all kinds of languages: A mix of VB/C++. Loosely typed, "and" instead of "&&", etc. For a good language reference you might find some tutorials online. Or maybe get yourself a nice book about it. Right now I have less than half a day of experience with Lua, so I don't know everything yet either :).
Here is a small example what I'm using as a test script, called test.lua:
health = 100; PrintNumber(30); function Add(x, y) return math.cos(x); end -- function Update(x,y) -- return x + y; -- end
LuaPlus
There is one thing I forgot to mention. I'm using an alternative version of Lua, called LuaPlus. This version is modified for better support in (V)C++. It can be used in Managed C environments too. It also comes with a debugger and a "QuickWatch" viewer. I haven't tried those yet, though.
Download the source and the binaries. Include the binary ".lib" files in your project (where you add the DirectX or SDL .libs to normally). You also have to include some files from the LuaPlus source directory. Check out the samples that come with LuaPlus to get it working!
LuaPlus is quite a bit more convenient than Lua 'normal'. There is a tutorial about Lua on Gamedev.net.
health = 100;
This means that a global variable "health", with a value of 100 is created immediatly when the script is loaded.
PrintNumber(30);
PrintNumber is a function I have in my C++ program, the code actually calls my C++ function from the script. This opens many possibilites AddBot("John"), "CreateExplosion(10,20)", etc. This is Lua -> C++.
Next, I also wanted to try C++ -> Lua. This could be great for binding your script to events. Also notice the "math.cos". You can use math in your scripts as well. There's even IO stream support, etc.
I believe you can actually call objects too. I haven't tried that yet, but I probably will later.
The Interface between C++ And Lua(Plus)
C++ and Lua are two different environments. How are those ever going to talk to each other? Internally Lua works with stacks (pushes, pops, should be a bit familiar, I assume). I believe that you have to take care of all the stack push/pops yourself in normal Lua.
With LuaPlus it's a bit different. It's close to normal C++. We'll put the code above to use in our C++ program.
I made a seperate .h file to test around with LuaPlus. I will paste the code here first and then explain it step by step.
#pragma once #include "LuaPlus.h" using namespace LuaPlus; /* Lua test code */ static int LS_PrintNumber(LuaState* state) { LuaStack args(state); // Verify it is a number and print it. if (args[1].IsNumber()) { Log << args[1].GetNumber(); printf("%f\n", args[1].GetNumber()); } // No return values. return 0; } void test(); void test() { //Create state LuaStateOwner state; //With this the script can access our own C++ functions: state->GetGlobals().Register("PrintNumber", LS_PrintNumber); //Open test file: int iret = state->DoFile("test.lua"); //Get a global variable: LuaObject sObj = state->GetGlobal("health"); int mytest = sObj.GetInteger(); //Update the value: sObj.AssignInteger(state, 30); //Get value again: mytest = sObj.GetInteger(); //Call a function in lua: LuaFunction<float> Add = state->GetGlobal("Add"); float myret = Add(3.14f,0.0f); }
I have the habit of simply setting breakpoints in my code, and check out the value. Hence I don't 'print' much.
At the top of my code you will see the required declarations needed for the Lua file. Then follows a custom Print function, found in one of the LuaPlus examples. Then the last "Test" function is my test setup.
States
States are individual scripts, or environments. Every seperate script runs in its own state. If you want to run four scripts, you can't do that with one state, unless you unload it every time. Loading takes time, so you would want to use multiple states. (That's how I understood it from the docs)
//Create state LuaStateOwner state; //With this the script can access our own C++ functions: state->GetGlobals().Register("PrintNumber", LS_PrintNumber);
The above code creates a state variable, and registers my earlier mentioned LS_PrintNumber function.
The name it's registered as is "PrintNumber". I do this before loading the acutal script, since PrintNumber is called immediately (not within a function) from the Lua script. As seen above in my Lua code.
//Open test file: int iret = state->DoFile("test.lua");
This code simply opens (and if needed, compiles) the lua code. If it doesn't load because of errors, it will return an error code. 0 is OK. The rest is an error. You can lookup the table with errors from the Lua site. It's like 4-5 kinds of errors. The compiler returns way more descriptive errors, hence it's always useful to see why your code doesn't compile using this command line program:
E:\LuaPlus\Bin>luaplusc "E:\src\test.lua" luaplusc: E:\src\test.lua:6: unexpected symbol near `/'
Note that there are multiple ways to run a script. You can also use DoString(string) to run a script from a string you loaded somewhere. (or generated using an interface, etc.)
//Get a global variable: LuaObject sObj = state->GetGlobal("health"); int mytest = sObj.GetInteger();
Here we get our global health variable, which we declared at the first line, remember? LuaPlus has all kinds of functions: GetInteger, GetString, GetDouble, GetFloat, etc. to get the right type of variable.
Also notice the "LuaObject" we are using to get the value. We now have a reference to the "health" variable.
Which also means we can update it:
//Update the value: sObj.AssignInteger(state, 30); //Get value again: mytest = sObj.GetInteger();
"Health" is now 30. Again, many functions: AssignFloat, AssignString, etc. The auto complete list (In VC++) that pops up when you type the '.' is quite straight forward.
//Call a function in lua: LuaFunction<float> Add = state->GetGlobal("Add"); float myret = Add(3.14f,0.0f);
This calls our "Add" function in Lua, from our C++ code. Pretty nice, first get a pointer to the function and then call the function. Notice my Add function does not do an add. It returns the cosinus, something I wanted to test.
Also notice the "float". This could be an integer, double, etc. Depending on your function in the Lua code. I think it's quite obvious.
Final words
I hope this simple introduction to Lua helps someone starting with Lua. I'm a complete starter right and are simply writing this to remember it myself after my Uni tests soon :).
|
http://content.gpwiki.org/index.php/Lua:Tutorials:Scripting_with_LuaPlus_and_Cpp
|
CC-MAIN-2014-42
|
refinedweb
| 1,472
| 77.74
|
Static Behavior
In Java, classes can have static fields and methods. A static field is effectively equivalent to a file static in C; it's a variable that has the lifespan of the program but has its visibility restricted to the compilation unit in which it is declared (or the class, in Java). A static method is effectively a namespaced function.
Objective-C has functions inherited from C. There is no direct equivalent of a static method, however. Classes in Objective-C are real objects. You can take a pointer to a class just as you would with an object and send it messages. This lets you do some quite interesting things that are difficult in Java. One example is having a dictionary mapping from strings to classes and creating a new class depending on which string you parse. This is incredibly useful, for example, when parsing XML and wanting a different parser class for each element that you might parse.
|
http://www.informit.com/articles/article.aspx?p=1568732&seqNum=5
|
CC-MAIN-2019-18
|
refinedweb
| 161
| 63.9
|
I also need help inserting a line of code that if the date validation comes back invalid, it loops back to the beginning of the program to ask the user for another date, or to press Q to quit. I have no idea how to do this, only have 4 chapters of a beginning java book for help.
import java.util.Scanner; /* Date Validation Program. This program is designed to take user input of a date in DD/MM/YYYY format and validate this date. It is to find out of this year is a leap year. It is then going to say if the date entered was valid for that year. It will then write out the full month name with the date. It will also give the date number of the year. This program will show the knowledge of creating working methods for use by a main method. Written by: Anna Javorka **/ public class DateValidation { //This section is for the main method, which will later call the written methods. public static void main(String[] args) { //Declare Variables String Date; String Day; String Month; String Year; int day = 0; int month = 0; int year = 0; //import scanner Scanner in = new Scanner(System.in); //Request User Input. System.out.print("Enter a date in MM/DD/YYY format:"); String Date = in.next(); //convert string to int for later comparison String Month = Date.substring(0, 1); int month = Integer.parseInt(Month); String Day = Date.substring(3, 4); int day = Integer.parseInt(Day); String Year = Date.substring(6, 9); int year = Integer.parseInt(Year); String LEAP = LeapYear(Date); String VAL = DateValidate(Date); { if (string VAL.equals(Invalid)) { System.out.println("Error"); //add system loop } String FULL = FullDate(Date); String NUM = DateNumber(Date); System.out.print("Input Date: " + Date); System.out.print("Full Date: " + FULL); System.out.println("Date Number: " + NUM); //Add loop back to top } /* Determines if input year is Leap year by algorithim. Returns String of LeapYear or NotLeapYear **/ public static string LeapYear(String LEAP) { Boolean isLeapYear = ((year/4 == 0) && (year/100 != 0) || (year/400 == 0)); If (isLeapYear) { Return ISLeapYear } Else { Return NOTLeapYear } } /* This Method Validate's the user's input as a real date. Month must be between 1-12. Day must be correct for it's month as well as take Leap Year into account. **/ public static string DateValidate(String VAL); { //validate the date if in a leap year if (LEAP.equals(IsLeapYear)) if (month == 1) { if ((day >=1) && (day <=31)) { Return Valid; } else { Return Invalid; } } //February has 29 days else if (month == 2) { if ((day >=1) && (day <=29)) { Return Valid; } else { Return Invalid; } }
|
http://www.dreamincode.net/forums/topic/317425-question-about-methods-and-restarting-program/page__pid__1830089__st__0
|
CC-MAIN-2016-22
|
refinedweb
| 435
| 67.04
|
2006
Filter by week:
1
2
3
4
5
SQL Server 2005 Integration Services Service Won't Start
Posted by Tom Murphy at 12/29/2006 4:40:15 PM
Has anyone seen this before? Anyone know how to fix it? I'm getting pretty desperate and have found no info about this on any of several web searches. Sql Server 2005 SP 2 is running on Windows Server 2003, SP 1. When I installed SQL Server 2005 SP 2, all installations succeeded including SS...
more >>
How to dyamicly change excel connection string
Posted by Radovan at 12/29/2006 3:39:30 PM
Hi, I have one problem. In SQL 2005 I have SSI package to export data to some excel files. I have for loop container with some job (one of the job is generate file name for exporting data and put the name into variable). My question is how I can dynamicly change Excel Connection Manager to p...
more >>
Accessing DTS properties via TSQL
Posted by BLS at 12/29/2006 12:25:36 PM
Can anyone tell me how to access DTS properties (such as global variable values) via TSQL? Examples would be nice. Thanks ...
more >>
SSIS DataFlow tasks - Scipt expression reference language
Posted by Christophe Leroquais at 12/28/2006 6:54:09 PM
Hey there! does anyone know a good site/documentation about the SSIS expression script language (e.g.: used in Derived Columns)? Thx, Chris ...
more >>
Linking Outlook fields from the "Contact Folder" to a custom form in Outlook...
Posted by Jeff at 12/24/2006 2:05:19 PM
Using Exchange Server 2003/Outlook 2003, we have created a custom Help Desk form in Outlook. We would like the ability to link specific fields from the Contacts Folder, in a drop-down list box if possible. We are far from knowing that something like this is even possible. Would appreciate *any...
more >>
Which compontent can retrive multi-tables result from a stored procedure and save as XML which format same as ADO.NET dataset.writeXML?
Posted by ABC at 12/22/2006 12:00:00 AM
Which compontent can retrive multi-tables result from a stored procedure and save as XML which format same as ADO.NET dataset.writeXML? ...
more >>
Data Link support for Data Transformation Services
Posted by jayl via SQLMonster.com at 12/21/2006 3:06:41 PM
Hi All, I am looking to use DTS to manipulate data from a number of sources (access, oracle, etc.) and am looking for some direction as to best practices and gotchas that I should worry about. Do people recommend using DTS or is there a better way of doing this type of data import/manipulatio...
more >>
Data Driven Query Problem
Posted by Kurt at 12/21/2006 9:33:30 AM
Hi, I've got a big problem with SQL Server 2000 DTS and I hope that you could help me. I'm making a package (PKG A) calling an another one (PKG B) using =E0 Data Driven Query. When I run PKG B alone it works perfectly and there is no problem, that's perfect. But when I run it from the PKG A I...
more >>
Don't see what you're looking for? Search DevelopmentNow.com.
Simple Question on DTS - Please help me ..
Posted by karthik at 12/21/2006 6:42:23 AM
Dear all, Using DTS i'm transferring 3 tables (Just select * statement ) from database A and loading the 3 tables to database B repectively . I need the tasks to be set like this: 1) There should be only one source and one target in the dts designer sheet . 2) Every time before i load the d...
more >>
Problem using Excel named ranges within a DTS package
Posted by dcrawley at 12/21/2006 3:41:13 AM
Hello I am importing data from an Excel spreadsheet into a table, via DTS. I am trying to use a named range in the spreadsheet, and have used a script from Allan Mitchell to achieve this:- ' 243 (Excel Named Range) Option Explicit Function Main() Dim sActualLocationOfData Dim Exce...
more >>
Cancel SSIS package programaticly
Posted by Ralf Hedler at 12/21/2006 1:11:00 AM
Hello, we execute packages from our windows-application. Some packages take a long time. Is it possible, to cancel packages programaticly? Would be very happy for any comment! Best regards Ralf Hedler ...
more >>
Multiple Workflow execution of Tasks
Posted by tturner6 NO[at]SPAM hotmail.com at 12/20/2006 8:40:29 AM
I have written many DTS packages to move data, I've never had to branch in my workflow of tasks. I'm creating a package which has a log table. It runs four times a day. It will check the log table, if the package has already run, stop. If it hasn't run, go on with further processing. I'v...
more >>
Data migration from Mainframe to SQL Server
Posted by cjeffwang NO[at]SPAM gmail.com at 12/19/2006 8:05:01 SQL ...
more >>
SSIS Package runs in VS but not on schedule
Posted by Svein Terje Gaup at 12/19/2006 1:44:28 AM
I have a package that transfers data from a table in SQL Server to a table with identical schema in Oracle. This package runs fine when run from Visual Studio, but it fails when I set it up with a schedule in SSMS. Error message is: Execution of job 'Eksporter til Formula' failed. See the hi...
more >>
SSIS: Oracle to SQL Server transformation
Posted by Fariba at 12/18/2006 9:08:00 PM
I have to come up with a SSIS package to tranform data from oracle database to our sql server database. This oracle db is managed by a third party , I have been given 10 views from oracle so I can extarct information I need. How should I design my SSIS package? Do I have to have 10 different ...
more >>
Dynamic setting of Destination in Data Transformation Task
Posted by Blake at 12/18/2006 3:41:51 PM
Call me stupid, but I need help. I'm converting from a Dbase5 file to SQL database. I have a "dbase5" connection for the source database and a "OLE ..SQL" connection to the destination database. Between the two connections I have a Data Transformation Task. I have a global variable called "...
more >>
SSIS Error: Failed to lock variable xx for read access - Script Task under Foreach Container
Posted by jobs at 12/15/2006 11:46:20 AM
Perhaps somebody in this news group knows. In SSIS I have a foreach container thats got an filename enumeration collection of files in a directory stored in a variable I call FileName. Inside the foreach container, I have script task (vb.net) that I want to pass that FileName variab...
more >>
SSIS Export to iSeries table
Posted by Helmut Knappe at 12/15/2006 12:00:00 AM
Hi, I am asked to help a customer writing data to a table on an iSeries system. There is no ODBC source nor target in SSIS. But he can read using the DataReader Source via ODBC. But for writing? The DataReader Target only goes to memory. And what I am supposed to do then? Isn't there...
more >>
import a text file from a remote server using A DTS package
Posted by bj at 12/15/2006 12:00:00 AM
Ive tried several ways to create a DTS package to import a text file into my SQL database from a remote web server (ip address). When i walk through the text file wizard I've tried the following paths for the connection but it keeps failing. My permissions on the webserver are ok because yo...
more >>
Create a Dynamic file name with ActiveX script
Posted by LionsDome at 12/14/2006 6:17:52 AM
Hello, I am trying to create a file name that puts in the current date and time at the time of creation. I am able to create it just fine without the date and time stamp but not with it. Can someone out there tell me what I am doing wrong? I keep getting a type mismatch error. I think the pro...
more >>
RE-opening an SSIS package
Posted by Peter Longstaff at 12/13/2006 5:36:34 AM
Hi Guys, I have just moved from SS2000 to SS20053 and I can not find SSIS packages. I have created a SSIS package and I now want to modify it. Can anyone tell me where to look for SSIS packages saved on the server? TIA Peter...
more >>
Execute SSIS Pkg From MSMQ Trigger??
Posted by gfergo NO[at]SPAM gmail.com at 12/13/2006 3:30:22 AM
Good Morning, Has anyone successfully executed an SSIS package from within an MSMQ trigger? Environment - We are running Windows 2003, .NET Framework 2.0, and Sql Server 2005. Our trigger is written in C#. It has been registered using RegAsm. It is running inside Component Services (...
more >>
Business studio is missing from installation
Posted by Gene. at 12/12/2006 8:58:00 PM
Hi I have installed evaluation version of sql 2005 and i see that business studio is missing. I tried to reinstall but got the same result. Does anybody know what could be a problem? Gene....
more >>
simple DTS question
Posted by paulhux174 NO[at]SPAM hotmail.com at 12/12/2006 8:45:58 PM
I'm exporting ( dts package) a query to a csv file, how do I get the column names in the file. I get the data but no column headings. Thanks in advance ...
more >>
How to execute ssis package from .vbs?
Posted by Flemo at 12/12/2006 6:59:00 PM
I am upgrading to SQL 2005. I have a VB script that calls a DTS package (shown below). Is there a way for a VB script to call a SSIS package? I have seen posts that mention adding a reference to Microsoft.SqlServer.ManagedDTS.dll in VB.NET and calling oApp.LoadFromSQLServer, but what about V...
more >>
Is (local) always default instance
Posted by hlop2002 NO[at]SPAM hotmail.com at 12/12/2006 12:53:57 PM
Quote from a book: Using (local) for the server name ensures that this package can be easily migrated from a development computer to a production computer without editing the DTS package, because this connection object will always attempt to connect to the default SQL Server instance on the c...
more >>
ETX Delimiter in Flat File
Posted by bigbrorpi NO[at]SPAM gmail.com at 12/11/2006 1:58:14 PM
Hi - I can't figure out how to tell the Flat File Connection Manager to use ETX (Hex 03) as the row delimeter. I tried typing {&h03}, but that didn't work. Does anybody know how to accomplish this? Thanks! B ...
more >>
Beginner question: Connecting to Oracle via SSIS
Posted by jrpfinch at 12/11/2006 2:19:25 AM
I have no experience of Oracle and have a little SQL Server knowledge. I am trying to connect to a Oracle 9i database managed by a (not very good) external vendor and hence have little knowledge of its inner workings. I have got as far as adding a new datasource in SSIS and would be grateful ...
more >>
DTSRUN Error
Posted by Usha at 12/10/2006 1:45:44 PM
Hi Hi I have a package executed by SQLSERVER agent.It was executing fine over months now.It has failed over weekend with message. Executed as user: DTSRun: Loading... DTSRun: Executing... DTSRun OnStart: DTSStep_DTSOlapProcess.Task_1 DTSRun OnError: DTSStep_DTSOlapProcess.Task_...
more >>
Exporting a DTS query as an ASCII File
Posted by Mike Chung at 12/10/2006 11:17:00 AM
I am performing a query and an ASCII export within a DTS job. The conditions have been set for the export as Vertical line (PIPE) as the delimator and having a hard return at the end of each row. How can I put a vertical line at the end of each row. I tried forcing a new column with PIPE bu...
more >>
SSIS. Dynamically populate collection in Transfer SQL Sevrer Objects Task
Posted by mEmENT0m0RI at 12/8/2006 2:16:27 PM
Hello everyone, I can't seem to be able to add a reference to Microsoft.SqlServer.Dts.Tasks.TransferSqlServerObjectsTask Namespace in Sript Task because it just doesn't show up in the available components list. Any ideas? Thank you, Igor *** Sent via Developersdex...
more >>
How to run a DTS/SSIS package from the web
Posted by KarenM at 12/8/2006 12:32:52 PM
I want to run a DTS/SSIS package from web but we do not want a full web server on the SQL box to do this but still the ability to run the DTS/SSIS package from a HTTP connection. Please advice. Thanks Karen ...
more >>
PRINT debug messages and CURSOR in stored procedure confuses DTS; "Invalid Pointer"
Posted by Bill_DBA at 12/8/2006 7:44:02 AM
I have the following stored procedure that is called from the source of a transformation in a DTS package. The first parameter turns on PRINT debug messages. The second, when equals 1, turns on the openning, fetching, and closing of a cursor. Things are fine if only one of the two parameters was...
more >>
Finding DTS package Name from encrypted name in DTSRUN command
Posted by Jon Blues at 12/8/2006 12:46:01 AM
I have a problem, I am in the process of moving jobs and packages to SQL 2005 and I have a DTS package on SQL Server 7 that has been scheduled into a job. Unfortunately I have no idea what DTS package this job is running only an encrypted package name in the DTSRUN command, unfortunately this ...
more >>
SSIS job problem
Posted by Sienna at 12/7/2006 3:17:49 PM
I wrote and deployed an SSIS package as a file system package (SQLServer 2005). I tried scheduling the package to run as a job step using SQL Agent Service Account with Windows Authentication and I have admin rights. It fails. I have tried a lot of things to get this job to run the package ...
more >>
SSIS Deploy Method file .SSISDeploymentManifest faulty
Posted by Slice at 12/7/2006 10:11:42 AM
I used the file system right-click, Deploy menu choice on the file ..\bin\deployment\<prjName>.SSISDeploymentManifest and 4 of the 13 .dtsx files were not published into the MSDB database although the log file generated says otherwise. No configuration files were involved; authored .dtsx in fi...
more >>
Copy data from 2005 to 2000
Posted by Will at 12/7/2006 8:51:19 AM
We have a production database on a SQL 2005 server. I need a copy of the data and the schema (if possible) to be copied to a SQL 2000 server we use for development and testing. I asked my SQL DBA to set up a DTS Job to do this every day at 4am, but my SQL DBA says you can't do this (incompatib...
more >>
importing space delimited text file
Posted by jesbuddy07 at 12/6/2006 11:04:01 AM
i've been looking on this subject in this group with no luck. i've been trying to import a text file which has multiple space delimited column to a SQL server 2000. i can't seem to find any answer on this subject. i tried typing space on the delimiter field, but i get "error source: microso...
more >>
Data conversion failed while converting column "officename" (70) to column "officename" (250).
Posted by IanO at 12/6/2006 7:39:57 AM
Please tell me how I can GLOBALY change the default error handling to just keep going. It is far more important that I get the good rows then stop on one bad one. This error is reported from the Data Flow Control Flat File Source. For some time, I have been changing fail on error to ignore ...
more >>
assigning source file at runtime
Posted by san at 12/6/2006 2:00:07 AM
hi, i have made a package in SQL SERVER 2000 and then execute it from vb.net the source file is .txt file and the destination is sql table. now the source file's path is to be given at runtime thru vb.net code. i used the following code: Dim dtsImport1 As New DTS.Package2 ...
more >>
Can't export to fixed-format text on workstation
Posted by JXStern at 12/5/2006 10:51:48 PM
It has something to do with my workstation config, same package builds and runs fine on other boxes. Breaks when I try to build it on my workstation. Simple connection to database, text file destination, straight-thru transform. Problem is the destination for some reason lists no fields, and...
more >>
Global Variable value cannot update in ActiveX Script Task
Posted by too.changmian NO[at]SPAM gmail.com at 12/5/2006 6:51:40 PM
Hi all, I want to call stored procedure from my DTS, after execute the SQL , i want to pass the global variable to ActiveX Script. However, the global variable in ActiveX Script only hold the default value Here is my stored procedure: CREATE PROCEDURE sp_test123 @TotalCountSum int OUTPUT...
more >>
In what table will I find my packages?
Posted by IanO at 12/5/2006 8:01:47 AM
When I connect to Intergration Services, expand msdb folders, I can see my packages. However when use the database as in: use msdb go Select top 11 * from dbo.sysdtspackagefolders90 Select top 11 * from dbo.sysdtspackages I can see the folders from the first query but cannot see...
more >>
Inserting 20 Million Rows to Oracle
Posted by TM at 12/4/2006 2:39:04 PM
Here is a good question for you senior DBAs. You have to DTS 20 million rows nightly from SQL 2000 to Oracle 9.2.0. How would you do it? Here is what I did and some timings. 20 million rows. SQL Server DTS using ODBC or OLE DB [took hours] ..csv to Oracle using ODBC [took hours] SQL ...
more >>
Looping through datasource using Dynamic Properties task
Posted by alex.barvenov NO[at]SPAM gmail.com at 12/4/2006 2:37:59 PM
Hi, I have read hard through both SQLDTS.com and this forum but still can't make it work together :). My problem is trivial. I need to transfer data from a number of identical MS Access mdb databases into an MS SQL Server db via DTS. I need to open the first mdb, transfer data, go to the s...
more >>
"File Connection Manager" for package logging
Posted by IanO at 12/4/2006 1:09:07 PM
When I enabled logging on a package, I was asked to give a unc path and filename. My options were (and still are) Create File Existing file Create Folder Existing Folder Ocassionally, I clean out the log files directory. So I option I look for is: If the file is there, append to it; if no...
more >>
DTS bulk insert error
Posted by Kalyan at 12/4/2006 12:16:00 PM
Hi I am trying to import a table from SQL Server 2000 (it got about 40 Millions rows) to SQL Server 2005 table, I am getting this error after transforming for about 1 hr. I used simple data flow task with default settings. Any help [SQL Server Destination [283]] Error: An OLE DB err...
more >>
Dynamic File Definition for Source Files
Posted by Jen Espana at 12/4/2006 7:30:00 AM
Hi all, I am hoping that someone out there might be able to help me with a requirement we have from our client. We want to use SSIS to import our client's data into their database. The issue is that the source file's column definitions may change. (For example, new columns may be added ...
more >>
copy sql database problems
Posted by chanix yahoo at 12/3/2006 6:03:55 PM
hi! i'm trying to copy a database from one server to another. but still problems persist. here are the details of my servers. Server A (source) OS - Windows 2000 Advanced Server SQL - SQL 2000 Enterprise Edition, Service Pack 4 Server B (destination) OS - Windows 2003 Server Enterprise ...
more >>
SSIS "The connection has been disabled" error
Posted by rhjDBA at 12/2/2006 3:36:00 PM
I created a custom script source component to pull data from a dirty pervasive 2000i DB service pack4. Heres the problem my component will pull 425 rows of data but it will do this only if I set my deffaultBufferMaxRows property to a small number like 17. It will not pull more than 425 it h...
more >>
·
·
groups
Questions? Comments? Contact the
d
n
|
http://www.developmentnow.com/g/103_2006_12_0_0_0/sql-server-dts.htm
|
crawl-001
|
refinedweb
| 3,518
| 72.97
|
Hello, I want to define a new loss function. Specifically, with the L1 norm applied on the weight of FC layer, my aim is to learn sparse feature. The code is following:
params = model.state_dict()
p = params['module.fc.weight']
loss = nn.CrossEntropyLoss()(output, target_var) + lambda*L1Loss(p)
optimizer.zero_grad()
loss.backward()
optimizer.step()
here is my loss function
def L1loss(weight)
loss = torch,norm(weight, p=1)
return loss
Obviously, it does work. The L1 loss dosen't decrease, but the CrossEntropyLoss seems well.
So, How to combine them? or my definition on L1 loss is wrong?
did you try to increase the lambda parameter for L1Loss? I cant think of any other reason why it wont decrease
|
https://discuss.pytorch.org/t/how-to-create-a-new-loss-function-on-weights-of-fc-weight/3108
|
CC-MAIN-2017-47
|
refinedweb
| 119
| 71.21
|
This is the third and final part of the “How to create a Ruby Gem series” where we are going to finalize our plugin and publish it to RubyGems.
The following topics will be covered:
- Reading and processing a ZIP archive using the rubyzip module.
- Testing ZIP processing.
- Uploading/downloading files via Lokalise API.
- Using VCR to record HTTP interactions and replay them on subsequent test runs.
- Publishing your gem to the RubyGems platform.
- Importer class
- Exporter class
- Publishing to RubyGems
- Conclusion
Below are the previous parts of the series:
Importer class
Import method
First of all, let’s finalize our
import! method in the
lib/lokalise_rails/task_definition/importer.rb file. This method should download translation files from Lokalise and store them in our Rails application.
def import! unless proceed_when_safe_mode? $stdout.print 'Task cancelled!' return false end open_and_process_zip download_files['bundle_url'] $stdout.print 'Task complete!' true end
This method is going to check whether or not the locales directory is empty. If it is not empty, we’ll ask the user whether s/he would like to proceed using the
proceed_when_safe_mode? method. This check won’t be performed if the “safe mode” option is disabled.
Next, we’ll open the URL pointing to the archive containing all the translation files. After extracting the files and placing them inside the locales folder, we’ll output a “Task complete!” message and return
true as a result.
Let’s also add the
proceed_when_safe_mode? method:
def proceed_when_safe_mode? return true unless LokaliseRails.import_safe_mode && !Dir.empty?(LokaliseRails.locales_path.to_s) $stdout.puts "The target directory #{LokaliseRails.locales_path} is not empty!" $stdout.print 'Enter Y to continue: ' answer = $stdin.gets answer.to_s.strip == 'Y' end
This is going to check whether safe mode is enabled and whether the locales directory is empty. If the answer to both is yes, then it will ask the user to confirm the import operation. Great!
Opening and reading a ZIP file
Now let’s see how to open a ZIP file containing our translations. Create a new method:
def open_and_process_zip(path) Zip::File.open_buffer(open_file_or_remote(path)) do |zip| fetch_zip_entries(zip) { |entry| process!(entry) } end end
This method accepts a path to the archive (in theory, this path may point to a local or a remote file) and then utilizes the rubyzip module to open it. Then each entry is processed one by one.
Create yet another method to read either a local or remote resource:
def open_file_or_remote(path) parsed_path = URI.parse(path) if parsed_path&.scheme&.include?('http') parsed_path.open else File.open path end end
While you could proceed without creating this method and utilize Kernel
open instead, this is not safe and not recommended for production use.
The next step is the
fetch_zip_entries method:
def fetch_zip_entries(zip) return unless block_given? zip.each do |entry| next unless proper_ext? entry.name yield entry end end
Here we are fetching files in the archive one by one and checking their extensions. If the extension is valid, then we process the file; otherwise, we just skip it.
We will need the
proper_ext? method when coding another class as well, so let’s define it for the
Base class within the
lib/lokalise_rails/task_definition/base.rb file:
require 'pathname' module LokaliseRails module TaskDefinition class Base class << self private def proper_ext?(raw_path) path = raw_path.is_a?(Pathname) ? raw_path : Pathname.new(raw_path) LokaliseRails.file_ext_regexp.match? path.extname end end end end end
This method converts the path to a
Pathname and makes sure its extension matches the regular expression stored in the
file_ext_regexp option.
Processing ZIP file entries
Finally, let’s create the
process! method inside the
importer.rb file:
def process!(zip_entry) data = YAML.safe_load zip_entry.get_input_stream.read # <====== 1 subdir, filename = subdir_and_filename_for zip_entry.name # <====== 2 full_path = "#{LokaliseRails.locales_path}/#{subdir}" FileUtils.mkdir_p full_path # <====== 3 File.open(File.join(full_path, filename), 'w+:UTF-8') do |f| # <====== 4 f.write data.to_yaml end rescue StandardError => e $stdout.puts "Error when trying to process #{zip_entry&.name}: #{e.inspect}" # <====== 5 end
This method does the following:
- Reads the file contents fetched from the archive.
- Gets the filename and its directory name from within the archive. For example, the
en.ymlfile may be stored in the
enfolder. This structure has to be preserved inside the Rails app as well, therefore we are fetching both the filename and directory.
- Creates full path to the translation file, including the locales folder, file directory, and its name. Then creates the corresponding path.
- Opens the newly created file and pastes translation data into it.
- If something goes wrong, outputs an error message.
Add yet another method to fetch the file and directory name. Place it in the
base.rb file:
def subdir_and_filename_for(entry) Pathname.new(entry).split end
This method will return a plain array with two items: directory and filename.
Downloading translations
So, we are done with processing the ZIP archive. The last step is to actually download the requested translations from Lokalise. To achieve this, create a new
download_files method inside
importer.rb:
def download_files opts = LokaliseRails.import_opts api_client.download_files LokaliseRails.project_id, opts rescue StandardError => e $stdout.puts "There was an error when trying to download files: #{e.inspect}" end
We are reading the import options, and then using the
ruby-lokalise-api client to perform the actual download.
Add the
api_client method to
base.rb:
require 'pathname' require 'ruby-lokalise-api' module LokaliseRails module TaskDefinition class Base class << self def api_client @api_client ||= ::Lokalise.client LokaliseRails.api_token end # ... end end end end
The last step is to load all the necessary modules in the
importer.rb:
require 'zip' require 'yaml' require 'open-uri' require 'fileutils'
At this point, the import feature is done and dusted!
Writing tests for the import task
To make sure everything is working well, let’s write a test for the import feature within the
spec/lib/tasks/import_task_spec.rb file (remove the old test we added in the previous part):
RSpec.describe LokaliseRails do let(:loc_path) { described_class.locales_path } let(:remote_trans) { '' } it 'import rake task downloads ZIP archive properly' do allow(LokaliseRails::TaskDefinition::Importer).to receive( :download_files ).and_return( { 'project_id' => '123.abc', 'bundle_url' => remote_trans } ) expect(import_executor).to output(/complete!/).to_stdout expect(LokaliseRails::TaskDefinition::Importer).to have_received(:download_files) expect(count_translations).to eq(4) expect_file_exist loc_path, 'en/nested/main_en.yml' expect_file_exist loc_path, 'en/nested/deep/secondary_en.yml' expect_file_exist loc_path, 'ru/main_ru.yml' end # ... end
In this test we are trying to download a sample archive and make sure four translation files are being created. However, we also need to add before and after hooks:
before do mkdir_locales rm_translation_files end after :all do rm_translation_files end
Prior to running any tests, we have to make sure the locales directory is actually created and that it is empty. Then, after all the tests are executed, we will perform a cleanup by removing all translation files.
Define some new helper methods in the
spec/support/file_manager.rb:
def expect_file_exist(path, file) file_path = File.join path, file expect(File.file?(file_path)).to be true end def locales_dir Dir["#{LokaliseRails.locales_path}/**/*"] end def mkdir_locales FileUtils.mkdir_p(LokaliseRails.locales_path) unless File.directory?(LokaliseRails.locales_path) end def rm_translation_files FileUtils.rm_rf locales_dir end def count_translations locales_dir.count { |file| File.file?(file) } end
Now you can run
rspec . and make sure your tests are running properly!
Testing a third-party API with VCR
The final thing I would like to do is to test that the archive is being downloaded properly from Lokalise. We perform the download operation via the API, and theoretically the same operation can be executed within our test. However, I would rather not send real API requests each time the tests are executed. Instead, it would be nice to record the API interaction once, store the result within a given file, and then “replay” this interaction on subsequent test runs. To achieve this, we are going to use a solution called VCR.
We have already added it to the gemspec, but it requires some additional configuration. Therefore, create a new
spec/support/vcr.rb file:
require 'vcr' VCR.configure do |c| c.ignore_hosts 'codeclimate.com' c.hook_into :faraday c.cassette_library_dir = File.join(File.dirname(__FILE__), '..', 'fixtures', 'vcr_cassettes') c.filter_sensitive_data('<LOKALISE_TOKEN>') { ENV.fetch('LOKALISE_API_TOKEN') } c.configure_rspec_metadata! end
VCR stores all HTTP interactions in special YAML files which are called cassettes (if you are 25 or older, you probably remember VCR cassettes with films like “Terminator” or “Conan the Barbarian”). However, certain data like API tokens should not be stored in the cassette, therefore we will use
filter_sensitive_data to exclude it.
Now create a new
spec/lib/lokalise_rails/task_definitions/importer_spec.rb file:
describe LokaliseRails::TaskDefinition::Importer do it 'returns a proper download URL' do project_id = 'PROJECT_ID' allow(LokaliseRails).to receive(:project_id).and_return(project_id) response = VCR.use_cassette('download_files') do described_class.download_files end expect(LokaliseRails).to have_received(:project_id) expect(response['project_id']).to eq(project_id) expect(response['bundle_url']).to include('s3-eu-west-1.amazonaws.com') end end
Make sure to replace
PROJECT_ID with a real Lokalise project ID. In this test we are using a cassette called
download_files. If said cassette does not yet exist, it will be created for you by sending a real API request. On subsequent runs, however, the recorded interaction will be utilized.
Exporter class
Export method
Now let’s take care of the
export! method in the
lib/lokalise_rails/task_definition/exporter.rb file. It should upload all translation files matching the given criteria to Lokalise:
require 'base64' # <===== 1 module LokaliseRails module TaskDefinition class Exporter < Base class << self def export! queued_processes = [] each_file do |full_path, relative_path| # <===== 2 queued_processes << api_client.upload_file( # <===== 3 LokaliseRails.project_id, opts(full_path, relative_path) ) rescue StandardError => e $stdout.puts "Error while trying to upload #{full_path}: #{e.inspect}" end $stdout.print 'Task complete!' queued_processes # <===== 4 end end end end end
- Import the
base64module which we are going to use to properly encode the translation file contents.
- Take each file from within the locales directory.
- Upload a translation file to Lokalise via the API. The upload process will take place in the background, so the API will respond with queued process data.
- Return an array with all the queued processes.
Processing translation files
Add a new
each_file method to the
exporter.rb:
def each_file return unless block_given? loc_path = LokaliseRails.locales_path Dir["#{loc_path}/**/*"].sort.each do |f| full_path = Pathname.new f next unless file_matches_criteria? full_path relative_path = full_path.relative_path_from Pathname.new(loc_path) yield full_path, relative_path end end
This method iterates over the files in the locales directory and keeps only those that match the criteria (we’ll take care of the corresponding method in a moment). Then for each file, we get its path as it relates to the locales directory, for example: if the file is named
~/my_project/config/locales/en/nested/en.yml, the relative path will be
en/nested/en.yml. We will send this relative path to Lokalise thus preserving the original file structure.
Here’s the method to check whether or not the file should be processed:
def file_matches_criteria?(full_path) full_path.file? && proper_ext?(full_path) && !LokaliseRails.skip_file_export.call(full_path) end
So, we have to make sure that:
- The entry is actually a file, not a directory.
- The entry has the proper extension (we already created the
proper_ext?method earlier).
- The file was not blacklisted using the
skip_file_exportoption. This option accepts a lambda or a procedure which should return either
trueor
falsedepending on the filename.
File upload options
To finalize the export feature, add the
opts method which should return the below file upload options:
def opts(full_p, relative_p) content = File.read full_p # <===== 1 lang_iso = YAML.safe_load(content)&.keys&.first # <===== 2 initial_opts = { data: Base64.strict_encode64(content.strip), # <===== 3 filename: relative_p, # <===== 4 lang_iso: lang_iso # <===== 5 } initial_opts.merge LokaliseRails.export_opts # <===== 6 end
- Read translation file contents.
- Try to determine the language ISO code of the given translation file.
- Encode translations using the
base64module.
- Provide the relative path to the translation file as its name.
- Set the language ISO code.
- Add any additional export options the user has provided.
Testing the exporter
Next, let’s test our exporter functionality within
spec/lib/lokalise_rails/task_definitions/exporter_spec.rb. First of all, add some
let instructions and hooks:
describe LokaliseRails::TaskDefinition::Exporter do let(:filename) { 'en.yml' } let(:path) { "#{Rails.root}/config/locales/nested/#{filename}" } let(:relative_name) { "nested/#{filename}" } before :all do add_translation_files! end after :all do rm_translation_files end end
Create two new helper methods within
spec/support/file_manager.rb:
def add_translation_files! FileUtils.mkdir_p "#{Rails.root}/config/locales/nested" File.open("#{Rails.root}/config/locales/nested/en.yml", 'w+:UTF-8') do |f| f.write en_data end end private def en_data <<~DATA en: my_key: "My value" nested: key: "Value 2" DATA end
Return to the spec file and add the following test:
it 'sends a proper API request' do allow(LokaliseRails).to receive(:project_id).and_return('PROJECT_ID') process = VCR.use_cassette('upload_files') do described_class.export! end.first expect(process.project_id).to eq(LokaliseRails.project_id) expect(process.status).to eq('queued') end
Don’t forget to replace
PROJECT_ID with a real Lokalise project ID. Now you can run
rspec . and observe the results!
Publishing to RubyGems
So, my congratulations to you: our Ruby Gem is now ready to be deployed to RubyGems! Before doing this, you might want to add some more tests to increase test coverage; all specs for the
lokalise_rails gem can be found at GitHub. Also, I would recommend pushing all the changes to your GitHub repo and making sure that the TravisCI tests are all green as well. Finally, run Rubocop and fix the issues found.
Then, navigate to rubygems.org and sign up (it’s free). Next, inside your command line interface, make sure you have the up-to-date RubyGems software and Bundler:
gem update --system gem install bundler
Build your gem using gemspec:
gem build lokalise_rails.gemspec
The above command is going to create a new file with a
*.gem extension named after your gem and its version. The final step is to actually publish your gem:
gem push lokalise_rails.VERSION.gem
Make sure to publish the
*.gem file, not the gemspec! You will be asked to log in via RubyGems, and then after a few seconds your new gem will be available!
Test the installation by running:
cd .. gem install lokalise_rails
Also you can view your gem’s information by visiting rubygems.org/gems/lokalise_rails.
Great job!
Conclusion
We have reached the end of both this article and the whole series. We have discussed how to create a Ruby Gem from scratch, add all the necessary configurations, how to introduce a testing suite, and lastly how to publish it to RubyGems. As a final note: don’t be shy about creating a new gem, even if it seems too simple. The open source world is all about contributions from different developers from around the globe, and even if your plugin can help ten people, that’s still a great achievement!
So, that’s all for today, folks. Thank you for staying with me to the end and see you really soon!
Return to the second part
|
https://lokalise.com/blog/how-to-create-a-ruby-gem-publishing/
|
CC-MAIN-2022-05
|
refinedweb
| 2,483
| 52.05
|
Jason R. Coombs <jar...@jaraco.com> added the comment: In [this question](), I encounter another case where a decorator would be useful. Without the decorator:
def is_docker(): path = '/proc/self/cgroup' return ( os.path.exists('/.dockerenv') or os.path.isfile(path) and any('docker' in line for line in open(path)) ) With the decorator: @suppress(FileNotFoundError) def is_docker(): return ( os.path.exists('/.dockerenv') or any('docker' in line for line in open('/proc/self/cgroup')) ) The decorator enables several improvements: - The boolean expression is now two expressions joined by 'or', which is semantically easier to parse and thus less prone to error than the three joined by and/or. - There's no longer a need to create a path variable and reference it twice, allowing the value to appear inline where it's most relevant. - The code is one line shorter. - The body of the function is two lines shorter. - The key behaviors the function is seeking to achieve are prominently presented. Acknowledged there are two caveats: - It's unclear the exception really is only expected in the 'open' call. - In the case where the exception is suppressed, the function will return None, which while resolving to boolean False, isn't False. Those caveats could be addressed, but will sacrifice readability or conciseness. I don't think this use-case warrants re-opening the ticket or revisiting the issue, but I wanted to share for consideration. ---------- _______________________________________ Python tracker <rep...@bugs.python.org> <> _______________________________________ _______________________________________________ Python-bugs-list mailing list Unsubscribe:
|
https://www.mail-archive.com/python-bugs-list@python.org/msg332540.html
|
CC-MAIN-2018-22
|
refinedweb
| 251
| 56.76
|
C the type changes. For example, this program:
#include <iostream> int main() { int x = -3; unsigned y = x; std::cout << y << "\n"; }
prints 4294967293. Like unsigned integer wraparound (and unlike signed integer overflow) these changes of value are not undefined behavior, but they may be unintentional and may also be bugs. As of recently, Clang contains support for dynamically detecting value changes and either providing a diagnostic or else terminating the program. Some fine-grained flags are available for controlling these diagnostics, but you can enable all of them (plus some others, such as signed overflow and unsigned wraparound checks) using -fsanitize=integer.
Now we get:
$ clang++ implicit2.cpp -fsanitize=integer $ ./a.out implicit2.cpp:5:16: runtime error: implicit conversion from type 'int' of value -3 (32-bit, signed) to type 'unsigned int' changed the value to 4294967293 (32-bit, unsigned) 4294967293 $
Here’s a truncation example:
int main() { int x = 300; unsigned char c = x; }
It gives:
$ clang++ implicit3.cpp -fsanitize=integer $ ./a.out implicit3.cpp:3:21: runtime error: implicit conversion from type 'int' of value 300 (32-bit, signed) to type 'unsigned char' changed the value to 44 (8-bit, unsigned) $
To suppress the diagnostic, we can make the conversion explicit using a cast:
int main() { int x = 300; unsigned char c = (unsigned char)x; }
Different parts of this functionality landed in Clang before and after the 7.0.0 release — to get everything, you will want to build a Clang+LLVM from later than 1 Nov 2018 (or else wait for the 8.0.0 release in a few months).
How would you use these checks? Generally, they should be part of a testing campaign, perhaps in support of a code audit. If you enable them on a non-trivial code base, you will run into diagnostics that do not correspond to bugs, just because real C and C++ programs mix integer types so freely. I suggest starting with -fsanitize=implicit-integer-truncation. Let’s look at doing this to Clang+LLVM itself and then compiling a hello world program: here’s the output.
I’d be happy to hear about any interesting bugs located using these checks, if anyone wants to share.
Finally, see this tweet by Roman Lebedev (the author of these checks) showing that the runtime impact of -fsanitize-trap=implicit-conversion is very low.
These problems are also caught by GCC when using options -Wconversion and -Wsign-conversion.
Note that w/ uniform initialization we will obtain a diagnostic since narrowing conversions are ill-formed modified version of your examples:
int x = -3;
unsigned y{x};
int x2 = 300;
unsigned char c{x2};
and from the obligatory godbolt:
we obtain a diagnostic for both cases.
See this Stackoverflow answer that I wrote for explanation as to why gcc and clang treat narrowing conversions differently wrt to making them warnings or errors:
Lars, both compilers can emit conservative warnings about this, but that is very different from reporting actual changes in value during execution.
Shafik, thanks! Also, I hate that syntax.
The relationship between the choice of unsigned/signed types with the size of int and the presence or absence of defined wrapping behavior should be viewed as a historical accident. Consider the effect of “x *= x;” when x holds the largest value of its type, for various types. Because code written for system with different sizes of “int” in the days before the Standard would have different expectations as to which values behave as signed or unsigned, and the Standard wanted to minimize impact on such code, the Standard ends up mandating that some systems process the difference between two uint16_t values as an unsigned type, and that other systems process it as a signed type.
Adding some new fixed-sized types whose semantics would be independent of the size of “int” would allow implementations to tailor their optimizations and diagnostics far more usefully than is possible with the existing types. If a struct member is supposed to represent a number of things from 0-65535, an attempt to add one when it holds 65535 should be considered a mistake which should be reported by a diagnostic build. If a struct member is supposed to represent a 16-bit checksum, an attempt to add one when it holds 0xFFFF should be considered normal. If the first purpose were served by a “16-bit whole number” type and the latter by a “16-bit wrapping algebraic ring” type, a compiler could easily and usefully generate a diagnostic in the first case but not the second. If the same “uint16_t” type is used in both cases, however, it would seem difficult to ensure that diagnostics from the erroneous actions get noticed without someone having to wade through lots of useless diagnostics from valid actions.
Am I the only one who’s concerned that some of those runtime errors may actually be evidence of bugs?
|
https://blog.regehr.org/archives/1633
|
CC-MAIN-2021-43
|
refinedweb
| 820
| 53.85
|
I want to send a union from Arduino to a python script running on a pc, and I want to parse the received union. I have read lots of topics online but I am stuck on how to do it in the most efficient way. Specifically, the union contains a struct of a string containing a sensor name, and a 32 bit integer containing a sensor value. I use the union as I understood is one of the best ways to deal with binary struct data as the ones I want to send.
Here my Arduino code
int32_t analog_input0 = 0; // Variables to send data struct serial_message { char sensor_name[20]; uint32_t sensor_value; }; union serial_message_union { serial_message ser_msg; uint8_t serial_message_bytes[sizeof(ser_msg)]; }; union serial_message_union ser_msg_u; void setup() { Serial.begin(115200); } void loop() { analog_input0 = analogRead(0); strcpy(ser_msg_u.ser_msg.sensor_name, "analog_input0"); ser_msg_u.ser_msg.sensor_value = 100; Serial.write(ser_msg_u.serial_message_bytes, sizeof(ser_msg_u.ser_msg)); }
Here my python code:
import serial ser = serial.Serial('/dev/tty.usbmodem1225061', 115200, timeout = 2) while True: if(ser.in_waiting > 24): #I set 24 because the message is composed by 20 char + 4 bytes for the uint32_t value incoming_data = ser.read(24) print('Incoming data: "{}" '.format(incoming_data)) #Here I want to parse the incoming data, where the first 20 bytes are the sensor name and the last 4 bytes are the sensor value
Can anyone suggest how can I proceed? Do you see any error/bad software practice so far in my code?
|
https://forum.arduino.cc/t/sending-a-union-from-arduino-via-serial-communication-and-parsing-it-in-python/593360
|
CC-MAIN-2021-31
|
refinedweb
| 241
| 55.95
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.