jxy (ShockinglyGreen) - I have a paid membership of brilliant.org. Not sure if you can view this content. Let me know if you want a shared account. - Done! The text isn't hard at all. I replaced <div> with <text> and put them in the <svg>. Thanks! - Hi @OSUblake, I successfully incorporated your solution into my project! Quick question: how do I scale this animated SVG so that it fits in containers of all sizes? - This looks amazing! I will study this later! Thank you! - Hi @OSUblake, here is the CodeSandbox link: You have been really helpful! Thank you! I just upgraded my account to "shockingly green" because of helpful people like you haha - Hi all, I tried to set up a CodePen for this but I'm so green that I haven't figured out exactly how, so bear with me... The letters "a", "b", "c" are in a separate div overlapping the svg element. My question is, how would you keep the letters "a", "b", "c" in between their respective angles as you drag the handles, as in the clip below? Any pointers will be appreciated... thanks! - Hi Cassie, sorry, let me take that back. 😅 As a noob, I find everything related to web development overwhelming, not just learning about GSAP. But I do sometimes find the site a bit hard to browse. For example, when I was eating out just now, I took out my phone to log on to greensock.com to check out the Draggable plugin page. Initially the page didn't load for a long 10 seconds, showing me a page with a header and footer but empty content, before it fetched content from the server, which then caused what I believe is called Cumulative Layout Shift. When the page finally loads, it looks like this on my Chrome/iPhone: Granted, this is not a big issue at all! And I feel so grateful that something like GSAP exists to make animation so much easier! 
Ideally though I want the documentation to be like By the way, love your personal website! The design! All the animations are delightful! Especially your avatar animation! I have always been wanting to ask, how did you make the "pens flowing out like water" effect?😃 - I feel the GreenSock website is by design not too beginner/user friendly, to keep the barriers to entry high haha😜 - Love your videos! - Just to let you know that I successfully implemented your suggestion - a triangle as a clipPath to cover the rest of the circles! Thank you @OSUblake !! - Thanks! - Thanks, Cassie! Let me give this a look! - Thanks! So there isn't a more intuitive way to update the angle shapes (which are SVG <path> elements) other than vanilla JavaScript? Is there a better tool / plugin to deal with such things? How to drag to change the shape of SVG paths? jxy posted a topic in GSAP: I have been trying to achieve something like this: Where one can drag the "handles" (the blue and yellow circles) to change the shape of the triangle. My understanding is that I would need to somehow change the "d" attribute of the "path" to make this work. The closest Draggable plugin example I could find is this: But it's changing the "points" attribute of a polygon, not the "d" attribute. I also looked at the MorphSVG plugin, but it seems to be meant for animating the morph from one shape to another very different shape. Any pointers, please? Thank you! Couldn't import "Draggable" plugin into a Next.js 12 project - jxy replied to jxy's topic in GSAP: I fixed this! You are so right! Simply placing gsap.registerPlugin(Draggable); inside of the useEffect hook fixed it! Thank you so much!! So far my experience with GreenSock has been awesome! 
Something like this:

import React, { useEffect } from "react";
import { gsap } from "gsap/dist/gsap";
import Draggable from "gsap/dist/Draggable";

export default function Component() {
  useEffect(() => {
    gsap.registerPlugin(Draggable);
    Draggable.create('#draggablediv', { });
    //...
  }, []);

  return (
    // ...
  );
}
https://greensock.com/profile/106617-jxy/
SQL from Java
From SQLZOO

You can connect to an SQL engine using JDBC. You will need to obtain a JDBC driver for the appropriate database.
- The example given is based on the MySQL JDBC Connector/J which may be found at MySQL
- Extract the MySQL Connector/J into a directory C:\thingies. (If you know what you are doing you will put this in your classpath.)
- The program connects to a mysql server running on my computer - the user name and password are valid - you may use them too.
- More examples at

You need to compile and execute the Java from the command prompt:

javac CIA.java
java -classpath "C:/thingies/mysql-connector-java-2.0.14;." CIA

/* CIA.java From By Andrew Cumming */
import java.sql.*;

public class CIA {
    public static void main(String[] args) {
        Connection myCon;
        Statement myStmt;
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            // Connect to an instance of mysql with the following details:
            //   machine address: pc236nt.napier.ac.uk
            //   database       : gisq
            //   user name      : scott
            //   password       : tiger
            myCon = DriverManager.getConnection(
                "jdbc:mysql://pc236nt.napier.ac.uk/gisq", "scott", "tiger");
            myStmt = myCon.createStatement();
            ResultSet result = myStmt.executeQuery(
                "SELECT * FROM cia WHERE population>200000000");
            while (result.next()) {
                System.out.println(result.getString("name"));
            }
            myCon.close();
        } catch (Exception sqlEx) {
            System.err.println(sqlEx);
        }
    }
}

It should respond with the names of four countries.
http://sqlzoo.net/wiki/SQL_from_Java
9.21.5. Free Response - CookieOrder A¶

The following is a free response question from 2010. It was question 1 on the exam. You can see all the free response questions from past exams at.

Question 1. An organization raises money by selling boxes of cookies. A cookie order specifies the variety of cookie and the number of boxes ordered. The declaration of the CookieOrder class is shown below.

public class CookieOrder {
    /** Constructs a new CookieOrder object */
    public CookieOrder(String variety, int numBoxes) {
        /* implementation not shown */
    }

    /** @return the variety of cookie being ordered */
    public String getVariety() {
        /* implementation not shown */
    }

    /** @return the number of boxes being ordered */
    public int getNumBoxes() {
        /* implementation not shown */
    }

    // There may be instance variables, constructors, and methods that are not shown.
}

The MasterOrder class maintains a list of the cookies to be purchased. The declaration of the MasterOrder class is shown below.

public class MasterOrder {
    /** The list of all cookie orders */
    private List<CookieOrder> orders;

    /** Constructs a new MasterOrder object */
    public MasterOrder() {
        orders = new ArrayList<CookieOrder>();
    }

    /** Adds theOrder to the master order.
     *  @param theOrder the cookie order to add to the master order
     */
    public void addOrder(CookieOrder theOrder) {
        orders.add(theOrder);
    }

    /** @return the sum of the number of boxes of all of the cookie orders */
    public int getTotalBoxes() {
        /* to be implemented in part (a) */
    }

    // There may be instance variables, constructors, and methods that are not shown.
}

Part a. The getTotalBoxes method computes and returns the sum of the number of boxes of all cookie orders. If there are no cookie orders in the master order, the method returns 0.

9.21.5.1. How to Solve This¶

- You will need to loop through each CookieOrder, since there may be more than one. What type of loop will you use?
- How will you continuously count the number of boxes? You will need a variable to hold that data.
- The method has a return type; what will you return?

9.21.5.2. The Algorithm¶

8-18-1: The method getTotalBoxes.

public int getTotalBoxes() {
    int sum = 0;
    for (CookieOrder co : this.orders) {
        sum += co.getNumBoxes();
    } // end for
    return sum;
} // end method
https://runestone.academy/runestone/static/JavaReview/ListBasics/cookieOrderA.html
Practice Projects and Code Fights (119:06) with Kenneth Love. Kenneth answers student questions while playing around with Python by doing a few practice projects and trying out some Code Fights. - 0:00 [MUSIC] - 0:05 I thought what we would do this week, we've done it before and - 0:08 people seemed to like it, I wanna do it again. - 0:11 Is just to build some learning code, - 0:18 just kind of playing around with some code. - 0:25 So I've got this one which is beginner python exercises. - 0:31 I've got Karen's list of projects, which is basically any language you want. - 0:37 And then there's also this site called CodeFights, - 0:39 I don't know how many of you have seen CodeFights or not. - 0:42 But it's pretty cool, they give you a challenge and - 0:45 you have to go write the code for it in whatever language you want. - 0:48 And then they tell you how well you're doing or not, which is cool. - 0:55 So I'll leave that up to chat, - 0:57 to pick like kind of which one of those they want for a minute or so. - 1:00 If nobody comes in then I'll just pick one, and we'll do stuff. - 1:06 But we'll talk about the code that we're gonna write. - 1:08 We'll talk about just other Python things or - 1:13 just programming things or whatever, - 1:17 we'll talk about whatever you all want. - 1:21 One thing that's really fun is today is my three year anniversary for - 1:25 Treehouse, so that's kinda cool, I started three years ago today. - 1:32 And also today is Andrew Chalkley's birthday, so - 1:37 if you are a fan of Andrew wish him a happy birthday. - 1:41 Mohammad, you want some PHP? - 1:45 I'm trying to get Alina to do it, - 1:47 there's a good chance I'm gonna get her to do it, probably next month. - 1:54 Because I'm gonna be out for a week or two due to conferences. - 1:59 So I'll try to get her to do it. - 2:04 But PHP is a fun language, it's got some cool stuff. 
- 2:07 I, unfortunately, haven't done it in a long time. - 2:10 So I don't feel like it would be a great idea to have me doing PHP, - 2:16 probably be a bad idea. - 2:18 All right so, Let's - 2:23 just try some practice projects over here. - 2:27 Let's just see what we can find, that sounds fun. - 2:33 So this one we have kind of a little - 2:38 guide here showing, I guess, how hard this is. - 2:43 These are all supposed to be basic beginner programs or - 2:48 challenges, not super hard stuff. - 2:53 And then Karen's Projects here, these are really, really spread out. - 2:59 So we've got graphics stuff. - 3:01 We've got threading stuff. - 3:02 We got whatever. - 3:06 So yeah, any of these that y'all would like to see? - 3:12 Or we'll do some CodeFights. - 3:16 This one's codefights.com. - 3:18 This is github and this is practicepython.org. - 3:26 Okay cool, so let's try one of these harder ones here, - 3:34 just to see how hard the hard ones are. - 3:39 Let's start off with a three. - 3:45 Let's see. Let's try out the Reverse Word Order. - 3:50 Okay, so write a program using functions - 3:55 that asks the user for a long string containing multiple words. - 3:59 Print back to the user the same string except with the words in backwards order. - 4:04 So if we type in my name is Michelle, we get back Michelle is name my. - 4:11 Okay, that's not too difficult of a thing, right? - 4:16 So let's try this. - 4:17 So we'll say, reverse_words.py. - 4:21 This actually sounds like something I would make all of you do as a code - 4:26 challenge. - 4:27 [LAUGH] So there should be many of you that could tell me how to do this one. - 4:32 I'm gonna try and get these to where I can kind of see both things. - 4:35 So let's say, reverse_words(string). - 4:40 So the first thing we wanna do, we're taking in a bunch of strings, - 4:43 or we're taking a string with a bunch of words in it, - 4:46 we probably want to break it up, right? - 4:49 So let's do words = string.split. 
- 4:54 And that will break it up on the white space. - 4:57 So, tabs, spaces, new lines, all kinds of stuff like that. - 5:02 All right, and then we want to reverse that. - 5:06 So, let's return, Space separated, - 5:12 .join words in reverse order, that should do it. - 5:18 So we'll say input("Give me a sentence: "). - 5:29 And then we need to assign that to a variable. - 5:34 Sentence. - 5:36 And then let's say print(reverse_words(sentence)). - 5:42 All right, so is there anything in here - 5:47 that y'all find confusing or strange? - 5:53 If there's something in these six lines, - 5:56 that you're confused about, let me know, ask questions in chat. - 6:02 But let's go ahead and run this, and we've got down here Give me a sentence, so - 6:09 Hello to everyone in the live stream chat. - 6:15 And we get back chat stream live the in everyone to Hello, so cool it worked. - 6:22 Nice. - 6:24 Let's actually go a step further. - 6:27 Let's make it reverse the letters in the words and reverse the words. - 6:33 So instead of, we'll effectively print the entire string backwards, - 6:38 which that should be different. - 6:41 Let me check to make sure this isn't gonna just do what I think it is. - 6:47 We'll just print that. - 6:49 All right, so say Hello to everyone in the live stream chat. - 6:59 Yeah that would just do, okay so let's reverse the letters, - 7:03 leave the words in order. - 7:09 I'm going to change this too, so - 7:10 reverse_word_order, Cuz that's the name that makes more sense. - 7:17 Can I share the links? - 7:18 I will do my best. - 7:21 Chat is often weird about this for YouTube and stuff. - 7:27 So let's see what we can do. - 7:33 Let's try that and let's try this. - 7:44 And lastly, I'll pop in this one. - 7:53 Hey DarkDev, how are you doing? - 7:57 All right, so yeah, we've got the reversing word or we print the words out - 8:02 in the reverse order that they were put into the sentence. 
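Pieced together from what's typed on screen, this first solution probably looks something like the sketch below; the function name and the test sentence follow the transcript, but the exact code isn't fully visible in the captions, so treat this as a reconstruction.

```python
def reverse_word_order(string):
    # split() breaks the sentence on any whitespace: spaces, tabs, newlines.
    words = string.split()
    # A [::-1] slice walks the list backwards, reversing the word order.
    return " ".join(words[::-1])

# In the video the sentence comes from input("Give me a sentence: ").
print(reverse_word_order("Hello to everyone in the live stream chat"))
# chat stream live the in everyone to Hello
```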
- 8:05 Now let's print out the words in the order they were in the sentence but - 8:09 each word reversed. - 8:11 Not really harder to do but, - 8:22 But still, it's a neat thing to do, it's fun. - 8:26 So we wanna do words = - 8:30 string.split, again, because we wanna break it up into a bunch of words, right? - 8:36 And then I'm gonna do this as a list comp, so - 8:41 let's return, space separated, .join. - 8:46 And then word -1 through word in words. - 8:52 Okay, so now, This - 8:57 will be reverse_letter_order. - 9:01 All right, and let's try this one again, so - 9:05 we'll say again Hello to everyone in the live stream chat. - 9:12 Sweet, so now you can see we've got the words in reverse order, but - 9:16 then we have the words in the correct order but printed out backwards, right? - 9:24 Abdul, I posted the links in the chat a minute or so ago. - 9:29 And dimar, Python is completely free, yes, - 9:32 you can go to python.org and download it, and enjoy. - 9:37 All right cool. So - 9:38 I think we've definitely solved that challenge, right? - 9:43 That's not too bad. - 9:47 Now these are definitely beginner challenges. - 9:49 These are not ones that are like, you're going to spend a month - 9:56 solving one of these things, so So that's cool. - 10:00 What's the colon colon? - 10:02 So Stern, you're talking about these two characters right here, right? - 10:08 So that's a fun thing in Python. - 10:10 In Python we have a thing called a slice, so - 10:13 let me hop into the Python console here. - 10:20 Okay, so I'm gonna make a string, all right? - 10:23 So I'm gonna call this name and it's gonna be student, all right? - 10:29 So if I want to get just one character out of there, I can use brackets, - 10:35 the bracket notation and the index number of the letter, right? - 10:41 So in this case I got s because s is - 10:44 the zeroth index is the first thing in the string. - 10:48 So if I want to get a couple of things out of here, then I can do the same thing. 
- 10:57 Let's actually do the next one. - 10:59 And then I can list the next one that I wanna get. - 11:03 So I wanna get from one up to three, all right. - 11:07 So if I do that I get T and E cuz this is one, this is two, and - 11:11 we don't get three, we get up to three. - 11:17 Hector, the link is in chat, just a little ways above you. - 11:20 It's called code fights, codefights.com. - 11:24 All right, so Stewn and everyone else who's wondering about the double colons. - 11:31 So, that's how I get a sub string or a sub list of things, okay? - 11:40 I can also do, like, let's say five I don't think that's quite right. - 11:47 I can also list a step. - 11:50 Okay, so that's t and u, so it got one, - 11:56 two, three, four, it would have gotten five but it couldn't find it. - 12:02 But it only took every other one. - 12:03 So it did a step of one, and then a step of two took the next one, and - 12:07 then stops, okay? - 12:11 I can leave off anyone of those that I want and it will get, - 12:15 if I leave up the first one I'll start at the very beginning. - 12:19 If I leave out the last one, it goes to the very end. - 12:22 If I leave off both of them and put in a step, - 12:26 then I will get every other item, or every third item, or whatever. - 12:32 I can put in negative steps though, and it will reverse, it will go backwards. - 12:38 So it goes, in this case from the very back to the very front, and - 12:42 in this case takes everyone, in this case we take every other one. - 12:47 In this case we'll take every third one, that kind of thing. - 12:51 So, that's what we're doing here is we're taking the entire list of words and - 12:56 reversing that entire list. - 12:59 Or in this case we're taking all the characters, - 13:02 all the letters in the word, and reversing those. - 13:05 So, yeah, that's what we're doing. - 13:07 Thanks for asking me about that. - 13:09 Darkdev, I am doing the Ludum Dare. - 13:17 Yeah, my friend Owen and I are gonna do it. 
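The slice behaviour being described can be checked in a console session. These examples use the same "student" string from the video, then apply the same reversing trick to the words exercise; the sentence here is illustrative.

```python
name = "student"

# A single index grabs one character; indexes start at zero.
assert name[0] == "s"

# start:stop slices from start up to, but not including, stop.
assert name[1:3] == "tu"

# A second colon adds a step: every other character here.
assert name[::2] == "suet"

# A negative step walks backwards, which reverses the sequence.
assert name[::-1] == "tneduts"

# Leaving off start or stop defaults to the ends of the sequence.
assert name[:3] == "stu"
assert name[3:] == "dent"

# The same trick reverses a list of words, or the letters in each word.
words = "my name is Michelle".split()
assert " ".join(words[::-1]) == "Michelle is name my"
assert " ".join(w[::-1] for w in words) == "ym eman si ellehciM"
```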
- 13:19 [LAUGH] We're gonna use the language Rust, - 13:24 and we're gonna use the ggez library. - 13:29 So, we'll see how it goes. - 13:36 I don't know that will go well, but we'll see how it goes. - 13:41 We're not too worried about actually having something submitted. - 13:44 We really only worried about building something and playing around with it. - 13:48 So, if we get something built, that's cool. - 13:51 If we get something submitted, that's amazing. - 13:52 But if we don't, that's okay. - 13:57 All right, Shamul, you wanna know about Tuples. - 14:00 Okay, what do you wanna know about them? - 14:04 Ask me questions about them and I will happily answer them. - 14:08 While you're asking those, I am going to start the next one of these. - 14:11 So I'm gonna call this passwords.py so I'm gonna try this password generator. - 14:18 And we're gonna see how we go here. - 14:24 Okay, a couple of questions first. - 14:27 Cigar asks, when should we stop with a particular programming language and - 14:32 move forward to some other language? - 14:33 Python is my first language. - 14:34 You don't have to. - 14:35 You never have to move on. - 14:40 This isn't completely universal, the statement isn't completely true. - 14:43 But every programming language is capable, right? - 14:46 You can pretty much do anything you wanna do in pretty much any language, right. - 14:52 You might have a really hard time making a high performance video game in PHP or - 15:00 in I don't know, Manalua actually good for video games. - 15:06 Anyway, right. - 15:07 Some languages are better for some things than others. - 15:10 If you're building desktop applications, Python's not particularly great, - 15:13 it's okay, but it's not amazing. - 15:15 So you don't ever have to move on so long as the work you're doing - 15:20 works really well in the language that you like to use. - 15:25 So just stick with it. - 15:28 But it never hurts to know a couple of languages. 
- 15:30 So that said, learn some other languages. - 15:35 Marion asks, is migrating from PHP to Python difficult? - 15:44 Yes and no? - 15:45 I basically made that migration. - 15:49 Python is a lot more strict about how you do certain things and - 15:56 there's a lot less typing in Python than there is in PHP, like actual typing out - 16:01 letters for a lot of things, not everything, but for a lot of things. - 16:07 So some of those things are a little difficult, right? - 16:11 Like having to obey those rules can be really hard for - 16:14 some people when they go from PHP to Python, or any language to Python. - 16:18 Python's very, you need to do these things in this way. - 16:22 But it's not as hard as other languages, like say, Rust or Haskell, or whatever. - 16:27 Anyway, so the migration's not the easiest thing but it's not super hard. - 16:34 So Muhammad you want to know where it can be helpful, where what can be helpful? - 16:40 I'm gonna scroll up a bit through the chat and see if I see. - 16:49 Yeah, you said you want to know where that. - 16:52 I want to know that where it can be helpful. - 16:54 Can you tell me what the thing is that you wanna know where it's helpful? - 16:59 And then Dimer has asked, PHP or C#, is it better? - 17:03 It's really gonna depend on what you're doing. - 17:08 Yeah, just depends on what you're doing. - 17:10 Ryan, what are classes in Python, can I show an example? - 17:15 I sure can. - 17:17 And DarkDev, as far as the theme, you don't have to follow the theme. - 17:22 The theme is just there to prevent the empty page syndrome. - 17:25 I'm sure you've all experienced this, right? - 17:28 You have to write a story for class or something or you have to write a program, - 17:33 or you have to do whatever and you're presented with this page right here and - 17:38 you are just like I don't know what to put into that page, right. - 17:44 That can be really challenging. 
- 17:46 So to prevent that the Ludum Dare people made it to where they gave you a theme. - 17:51 And you can follow the theme if you want, you don't have to follow the theme but - 17:54 the theme helps you to prevent that empty page thing because - 17:58 okay I gotta build something that has to do with forest, okay great. - 18:02 Okay, so, now if I do the question about python we've heard Ryan - 18:07 wants to know about classes. - 18:10 I bet we can use a class for this password generator. - 18:12 And Shmuel doesn't understand the difference between dictionaries, - 18:16 and tuples. - 18:18 Okay, so we might be able to use both of those too. - 18:22 So let's see what the problem is first. - 18:26 Write a password generator in Python. - 18:29 Be creative with how you generate passwords. - 18:30 Strong passwords have a mix of lowercase letters, uppercase letters, numbers and - 18:34 symbols. - 18:35 The passwords should be random, - 18:37 generating a new password every time the user asks for a new password. - 18:41 Include your run-time code in a main method. - 18:44 That should be function not method. - 18:47 Extra, ask the user how strong they want their password to be. - 18:50 For weak passwords pick a word or two from a list. - 18:53 All right, cool. - 18:54 So, let's do this. - 18:56 Let's import string and let's also import random. - 19:02 So, Python provides us with a lot of libraries, right? - 19:06 And two of those libraries random and string are really useful, because they - 19:12 contain modules that we can use to build other things or methods we can use rather. - 19:18 Random lets us do random things. - 19:21 Strings has a lot of stuff about like ASCII characters. - 19:24 So, let's do password generator. - 19:28 All right, so this is a class, guess what? - 19:31 That's a class, that's it. - 19:34 So, there's a class. - 19:38 I've made a class. - 19:39 But I probably need a bit more information about this, right? 
- 19:44 So, let's have - 19:49 a thing here where we do, let's say - 19:55 strength, and we'll start with 16. - 19:59 Okay, so self.strength = min(). - 20:16 How do I wanna do this? - 20:19 I don't want them to be anything lower than 16. - 20:26 Max, that's what I want, so 16 or strength, okay? - 20:33 So this way, they have to give me something that is at least 16, right? - 20:39 If they give us something above that, then that's cool, we'll use whatever that is. - 20:44 If they don't give us something above that, we're gonna at least use strength. - 20:49 And then, we want to do - 20:54 self.password = - 20:58 self.generatepassword. - 21:04 We don't even need to get a pass on the strength, - 21:06 that's gonna be stored on the instance. - 21:09 And that's it, and then, we'll do def __str__ return self.password. - 21:15 All right, so we have a class right now, and - 21:19 its initialization method has an argument that comes in called strength, - 21:23 which is always at least 16 due to this. - 21:28 And then, we're gonna call this method called generate password, - 21:31 which I haven't written yet. - 21:33 And if we turn our PasswordGenerator instance into a string, it's gonna return - 21:37 self.password whatever the string is that we've generated for the password. - 21:43 So let's talk about how we generate this password. - 21:48 Okay, so that has to take self because all instance methods take - 21:53 the instance as the first argument, typically, that's called self. - 21:58 All right, so I'm gonna use the strength to be how long the password is, okay? - 22:06 So let's look at string, - 22:11 where is string? - 22:15 String, string, string, string, string, string, string. - 22:17 It's down here somewhere, there it is, the module string. - 22:20 All right, so you can see here, we have, very handy. - 22:25 We have printable, we have digits, - 22:30 we have ascii_letters, okay? 
- 22:34 So we want those three things, - 22:39 so let's just say, pool = - 22:44 string.ascii_letters + string. - 22:50 It wasn't not numbers? - 22:51 digits + string.punctuation, okay, so - 22:56 that's our pool of characters, right? - 23:01 So we take our letters, our digits, - 23:08 our punctuation and we're gonna cram those altogether into one big string. - 23:12 And then, I wanna pull random things out of there. - 23:15 So let's go look at the random module. - 23:21 There's random, random has a really handy, - 23:27 to me at least, function in it that is known as choices. - 23:34 Cuz I don't necessary want shuffle, we'll look at shuffle later. - 23:37 I don't necessarily want sample because sample is always unique, and - 23:41 we do maybe want things to repeat. - 23:43 So we're gonna use choices, - 23:45 choices is going to pull out random choices up to a number that we specify. - 23:51 So let's just do password - 23:56 = random.choices(pool, - 24:02 k=self.strength). - 24:07 So that's going to pull out a random number of choices based on the strength - 24:11 that we've passed in. - 24:13 So we're basically making a string that is 16 or 32, or whatever characters long. - 24:19 Then, I wanna take this a step further and I want to shuffle it, all right? - 24:24 And so that will randomly shuffle them around, so cool? - 24:30 So we do random.shuffle(password), so - 24:33 it will randomly shuffle all those things around that it randomly picked out. - 24:40 And then, is there anything else that I wanna do? - 24:50 I don't think there is, to shuffle an immutable sequence and - 24:54 return a new shuffled list, - 25:03 We'll have to see what this does, I have a feeling this may error out. - 25:07 Okay, so I think we're good enough for now, and - 25:11 then, we're gonna do a return password. - 25:15 So down here, let's do - 25:21 a if __name__ == '__main__': - 25:31 print(PasswordGenerator(input).("How strong - 25:39 should your password be?") ) ). 
- 25:45 All right, that's good enough for that, okay, so we're gonna run this. - 25:48 How long should your password be? - 25:50 Let's test this, I want something that's 10 characters long. - 25:55 This should automatically give me back 16, right? - 25:58 I should always get 16, I should never get anything below that. - 26:06 That's right, - 26:11 let's do this, - 26:16 strength = d2ti, - 26:22 there we go, - 26:26 okay, so try - 26:37 while True while strength - 27:07 Yeah, that's what I want. - 27:11 Isnumeric I'm not a big fan of this right here, - 27:21 but it'll work for now. - 27:23 else: - 27:29 print(PasswordGenerator(int(strength))), all right, - 27:37 let's try that one. - 27:40 Should be 10, returned non-string list type. - 27:56 There we go, all right, and let's see, 1, 2, 3, 4, 5, 6, 7, 8, - 28:02 9, 10, 11, 12, 13, 14, 15, 16, 16, sweet. - 28:08 So I got a password that is at least 16 characters long, - 28:13 and it's a randomly generated password. - 28:17 So that's cool, let's try this with something longer. - 28:19 Let's try 32 as a much longer password. - 28:23 So neat, so it gave me a random password, these would be fairly safe to use. - 28:30 Hopefully, [LAUGH] no websites would reject it if you use those, - 28:36 but yeah, so there's the password. - 28:38 So here's an example using random, using string, and using a class. - 28:45 I wanna look back through chat cuz I know I've seen some other questions come in. - 28:49 And blah, blah, blah, blah, blah, okay, dictionary and - 28:54 tuples, tuples, we'll go play with in a second, Shimul. - 28:58 DarkDev, we're not gonna talk about the theme, we're not gonna worry about that. - 29:03 David wants to know if I can program in Java, no? - 29:09 I can read it, and I get a pretty good idea of what's going on in it. - 29:12 I've been taking some of our Java content at Treehouse lately. - 29:17 So push came to shove, - 29:19 I could probably write just a little bit of Java, but no, Java's not my language. 
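Reassembled from the transcript, the password generator class looks roughly like this. The interactive strength prompt and the while/isnumeric loop from the video are left out so the class stands on its own; treat this as a sketch of what was typed, not a verbatim copy.

```python
import random
import string


class PasswordGenerator:
    def __init__(self, strength=16):
        # Never accept a strength below 16 characters.
        self.strength = max(16, strength)
        self.password = self.generate_password()

    def generate_password(self):
        # One big pool of letters, digits, and punctuation.
        pool = string.ascii_letters + string.digits + string.punctuation
        # choices() may repeat characters; sample() would force them to be unique.
        chars = random.choices(pool, k=self.strength)
        # shuffle() rearranges the list in place (it returns None).
        random.shuffle(chars)
        # Join the list of characters into the final password string.
        return "".join(chars)

    def __str__(self):
        return self.password


print(PasswordGenerator(10))  # still 16 characters: strength is clamped up
print(PasswordGenerator(32))  # 32 random characters
```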
- 29:25 It's a neat language, it's got some cool features. - 29:28 But it's not one that I have any real confidence in myself in just yet. - 29:35 Abdu says, they've just finished their first language, Python. - 29:40 Congratulations on finishing a whole language. - 29:42 And I wanna learn a framework to make a web application, - 29:46 should I learn Bottle, Django, or Flask. - 29:48 I would say, you probably wanna learn Django simply because it has all the bells - 29:52 and whistles and bits and pieces that you typically need for - 29:54 building a web application. - 29:56 Bottle and Flask are gonna require you to bring those in yourself - 30:01 Sometimes that's great, sometimes that's not. - 30:06 So I think at the first step, you kinda wanna just have everything there with you - 30:10 and you know that it's all gonna work. - 30:12 Sagara asked, what is Python used the most for in companies? - 30:17 It's used for a lot of things. - 30:19 Facebook uses it for URL routing and service routing. - 30:22 Instagram uses it for a good chunk of there stack or used to, - 30:26 I'm pretty sure they still do. - 30:28 YouTube uses it for a good chunk of their stack, - 30:30 again used to pretty sure they still do. - 30:33 It's used for data science a lot, so calculating where customers - 30:37 are abandoning shopping carts, and all that kind of stuff. - 30:42 It's used for a lot of things. - 30:44 Nikos, this, This right here is - 30:51 an application called Zeal, Z-E-A-L, and it is an offline documentation viewer. - 30:57 If you're on Mac, there's one just like this that's really handy called Dash. - 31:02 So yeah, check those out. - 31:04 Chad, glad I could help you with OOP. - 31:06 Blah, blah, blah, blah, [SOUND]. - 31:11 Sagara, if name equals main takes advantage of Python's name spaces. - 31:17 So name here is the current namespace. - 31:22 If I'm inside of a class or whatever, that class is now that namespace. 
- 31:27 So in here this says, if the script is being run directly, then do this stuff. - 31:33 If it's being imported somewhere else, don't do that. - 31:36 Humanoid, how is Windows going? - 31:38 Windows is fine. - 31:40 I had kind of an annoying thing earlier, trying to get SDL set up on Windows, - 31:45 but it turns out I was just doing it wrong. - 31:49 It wasn't actually Windows fault, it was my fault for - 31:52 not realizing I didn't have certain things installed and put in the right place. - 31:57 So once I got those fixed, it all worked fine. - 32:01 We'll take a look at some rust later, you all. - 32:04 I'll do my best to explain a little bit of it, and we'll look at it. - 32:08 Martin, what languages do I mainly know? - 32:12 Full web like HTML, CSS, JavaScript, Python. - 32:19 I can read and sorta kind of write, cuz I'm super rusty PHP. - 32:22 I got a bit of Java, I got a bit of Rust, - 32:25 I got a bit of Lua, I got bit of kind of whatever. - 32:33 And Marion, yeah, Python's very friendly to machine learning. - 32:38 So okay, cool. - 32:39 I think we got that one. - 32:39 I don't know that there's anything else here I wanna cover, - 32:42 except maybe we'll do the birthday dictionaries here. - 32:48 Just to help the person that was asking about - 32:54 understanding tuples and dictionaries. - 32:57 Humanoid, do I want a code fight? - 33:01 Not at the moment, but maybe in a bit. - 33:06 So let's get down here and let's do birth. - 33:12 Birthdays.py, so the difference between tuples, which looks like this. - 33:20 And dictionaries, which look like this. - 33:24 Let's start by creating our birthdays here. - 33:29 And we're gonna put in the birthdays of people's names, or - 33:33 we're gonna have keys of people's names. - 33:35 These are gonna be our birthdays. - 33:37 So it looks like we're gonna have these three people. - 33:42 So let's just, Paste those in there. 
- 33:50 And the cool thing about dictionaries is,
- 33:56 you can have your keys as strings, right?
- 34:01 They can also be integers and they can also be tuples.
- 34:04 They can be anything that is non-mutable, that's immutable, so
- 34:07 it means they can't be changed.
- 34:13 Which means that you can do a lot of nice things with them.
- 34:20 So let's pull out, let's actually find these people's birthdays.
- 34:26 So Albert Einstein, which I realized I just completely misspelled his name,
- 34:32 but Wikipedia's smart enough, they will find it for me.
- 34:37 Thanks Wikipedia for being smart.
- 34:38 So I'm actually gonna do these as dates.
- 34:45 So let's do datetime.date, and the year for this is 1879.
- 34:52 The month for that would be 3, and the day for that would be 14.
- 34:59 If I remember correctly, Python's datetime does not zero-index
- 35:05 months or days, unlike some people, JavaScript.
- 35:09 [LAUGH] So all right, let's also get Benjamin Franklin.
- 35:25 All right, so he was born 1706, 1, and 17.
- 35:33 And then Lady Lovelace,
- 35:38 when were you born?
- 35:42 [LAUGH] Not Amazon, wow.
- 35:48 Lovelace, I'll see you around DarkDev, have a good day.
- 35:54 And yeah, Humanoid, [INAUDIBLE] are pretty much the best thing ever.
- 35:59 Okay, so Lady Lovelace was born
- 36:03 1815, 12, 10, cool.
- 36:11 So let's go back to this prompt.
- 36:13 So what we're supposed to do is print out, I've got a bunch of birthdays.
- 36:20 Abdu, should you be a professional in Python to learn Django?
- 36:23 No, you can learn Django before you're a professional.
- 36:28 I don't know if there really is a, yeah, there's no requirement.
- 36:33 So just learn stuff when you wanna learn stuff.
- 36:38 Okay, so let's print, welcome to the birthday dictionary.
- 36:47 We know the birthdays of, and
- 36:52 then here we're going to do, for name in birthdays, print name.
- 36:59 So when you loop over a dictionary, when you do a for
- 37:02 loop on a dictionary, what you're actually looping over is the keys.
- 37:05 Because the keys are the primary part of a dictionary, they're the part that's the,
- 37:10 not to say most useful, but they're the most static part of a dictionary.
- 37:14 They're the part that Python does its best to store right up front somewhere nice and
- 37:18 handy, easy to get to.
- 37:19 So when you loop over it, that's what you get.
- 37:23 So in this case we're gonna print out the names, and
- 37:25 then we're supposed to do an input here.
- 37:28 Let's do name equals.
- 37:31 And then inside here, whose birthday do you want to look up?
- 37:44 And then let's say, they're gonna do that name.
- 37:50 And then, we'll do print birthdays name, okay?
- 37:59 So let's go ahead and run this,
- 38:04 and I wanna look up Ada Lovelace's birthday.
- 38:12 And let's do a .trim on that.
- 38:21 Does trim not exist?
- 38:29 No, it's strip.
- 38:31 What am I doing?
- 38:35 All right, so that should print out that.
- 38:37 But I have a feeling that this is going to look a little weird.
- 38:40 So yeah, so that's not necessarily the format that I'm used to, right?
- 38:48 That's not necessarily the way I wanna read it.
- 38:52 So I'm an American, so I typically want to do month, day, year, which is weird.
- 38:59 But let's go ahead and do it that way.
- 39:03 strftime, we wanna format this.
- 39:05 And how do we want to do this?
- 39:09 Let's go down here and
- 39:13 look at the list.
- 39:16 So I want the month first, so let's actually do this as the month's full name.
- 39:24 So %B, that's not a percent.
- 39:26 That's not a percent either.
- 39:34 Percent, capital B, and then the day of the month is %d.
- 39:41 And then the year is %Y.
- 39:45 All right, let's try this again.
- 39:48 Ada Lovelace, December 10, 1815.
- 39:52 So cool, that's nice.
- 39:54 That's neat.
- 39:55 All right, so that's pretty simple.
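Putting the pieces of that segment together, here's a rough sketch of the birthday-dictionary script, using the names and dates from the stream (the lookup function is my own framing, not the stream's exact code):

```python
import datetime

# Keys are strings here, but they could also be ints or tuples --
# anything immutable works as a dictionary key.
birthdays = {
    "Albert Einstein": datetime.date(1879, 3, 14),
    "Benjamin Franklin": datetime.date(1706, 1, 17),
    "Ada Lovelace": datetime.date(1815, 12, 10),
}

def lookup(name):
    # strftime with %B (full month name), %d (day), and %Y (4-digit year)
    # gives the American-style "December 10, 1815" format.
    return birthdays[name].strftime("%B %d, %Y")

# Looping over a dict loops over its keys.
for name in birthdays:
    print(name)
```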
- 39:57 I think we've pretty much exhausted these.
- 40:01 These basic challenges.
- 40:03 So we're gonna try something a little harder.
- 40:05 Humanoid, what music do I listen to?
- 40:08 The music in the stream is Chris Zabriskie,
- 40:10 he is a Treehouse employee and he makes pretty awesome music.
- 40:16 As for me personally, it's somewhat of an interesting mix maybe.
- 40:24 There's a lot of like, 90s, early 2000s
- 40:29 punk, hardcore, and rock.
- 40:35 And then like 1950s and 60s country, and
- 40:39 then just other random stuff as it comes up.
- 40:42 Zeeshon, what do I think of TensorFlow?
- 40:44 I haven't gotten to play with it.
- 40:46 But it seems pretty cool.
- 40:48 So I don't, I don't have any problems with it.
- 40:53 I think it's cool, but I haven't gotten too far through it.
- 40:58 Peter, this is not a Django project.
- 41:00 We are just doing some different code challenges, and
- 41:02 we're just talking about programming.
- 41:05 So, today's kinda more of a chill,
- 41:09 relaxed stream than before.
- 41:13 Abdou, can I explain the super function?
- 41:16 Sure, I can explain super.
- 41:22 So, let's pretend that we have, Yeah, let's just do this.
- 41:31 Let's do a,
- 41:39 BadPasswordGenerator.
- 41:44 And this inherits from PasswordGenerator, okay?
- 41:50 So right now this class is fine, and this class just does all its own stuff, right?
- 41:56 So, well not all of its own stuff, sorry.
- 41:58 It does whatever PasswordGenerator does.
- 42:00 But I want to override some of this.
- 42:03 So I'm gonna take self, and I'm gonna take any args and kwargs that come in.
- 42:10 But I don't care about them, right?
- 42:13 What I'm gonna do is I'm gonna do super().__init__.
- 42:18 And strength is gonna be equal to,
- 42:26 Actually I'm not gonna pass anything in, okay?
- 42:30 So now we have the same password as we had before, okay?
- 42:34 But I'm ignoring any kwargs that come in,
- 42:36 I'm not listening to you on whatever you're gonna feed me.
- 42:39 You can say, I want this to be 82 characters.
- 42:42 I don't care, I'm ignoring that and I'm calling super and
- 42:46 I'm just letting it do whatever.
- 42:48 So then I'm gonna say self.strength
- 42:54 = 8 and self.password
- 42:59 = self.generate_password, okay?
- 43:08 So now down here instead of that, let's do BadPasswordGenerator.
- 43:13 We're gonna run this, I want it to be 32 characters long,
- 43:16 no matter what, I get an 8-character one.
- 43:19 So the super calls the same function, or
- 43:24 same method usually, from the parent class.
- 43:27 So in this case, this calls init from here, it causes this init to be run.
- 43:35 And then we could go on and do whatever we want.
- 43:38 So super just goes to the super class and runs that method,
- 43:42 or runs the method that you specify.
- 43:45 All right, there's no music in the stream, there should be.
- 43:52 It's just very, very quiet right now.
- 43:55 So, I don't know.
- 43:58 All right, let's try one of these code fights.
- 44:00 I'm not sure how the code fight goes against somebody.
- 44:06 Humanoid, if you want to tell me how to do that one.
- 44:09 But let's just try whatever the day's challenge is.
- 44:18 So what do we have?
- 44:19 We have algorithmic and we have database.
- 44:24 All right, let's just set for, I don't wanna deal with database stuff.
- 44:41 I don't know what I really wanna do.
- 44:43 Too many of these.
- 44:45 The longestConsecutive one's kinda neat but I don't wanna deal with binary stuff.
- 44:51 Let's look at the strings one.
- 44:54 Given an array of equal length strings, check if it's possible to rearrange
- 44:58 the strings in such a way that after the rearrangement
- 45:01 the strings at consecutive positions would differ by exactly one character.
- 45:07 For InputArray, "aba", "bbb", "bab", the output should be false.
- 45:13 Okay, for InputArray "ab", "bb", "aa", the output should be true.
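A minimal sketch of the super() behavior just described — PasswordGenerator here is simplified for illustration, it's not the stream's actual class:

```python
import random
import string

class PasswordGenerator:
    def __init__(self, strength=12):
        self.strength = strength
        self.password = self.generate_password()

    def generate_password(self):
        # Just random letters; enough to demonstrate the inheritance.
        return "".join(
            random.choice(string.ascii_letters) for _ in range(self.strength)
        )

class BadPasswordGenerator(PasswordGenerator):
    def __init__(self, *args, **kwargs):
        # Ignore whatever strength the caller asked for...
        super().__init__()
        # ...then force a weak 8-character password anyway.
        self.strength = 8
        self.password = self.generate_password()

# BadPasswordGenerator(strength=32) still produces an 8-character password.
```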
- 45:17 Because the strings can be rearranged in the following way, "aa", "ab", "bb".
- 45:25 Okay, that, Seems all right.
- 45:35 So we get back a whole bunch of strings.
- 45:41 What's the, I'm just gonna look up distance.
- 45:48 I don't think that's gonna give me what I want, but
- 45:53 it's like the Levenshtein distance or something like that.
- 46:01 Yeah, that's not gonna get me what I want.
- 46:05 Levenstein, I know I misspell that but Dr.
- 46:09 Google knows what I want, yeah.
- 46:13 Okay, so there's a package that does it, that's cool.
- 46:15 I don't really want a package cuz I can't install one.
- 46:19 I just want the actual thing.
- 46:24 You did it in C, you didn't even do it in Python.
- 46:35 Wait just a second and let me look at the package.
- 46:38 So we have that external package, yeah.
- 46:48 We don't have any entry points, okay.
- 46:54 So if I remember correctly this is the thing that we want.
- 46:57 Okay, we want the Levenshtein distance of two strings.
- 47:00 All right, so let's try building this real quick first, and
- 47:04 then we'll see what we can get here.
- 47:07 Option that's it, there we go, how do you spell that, S-H.
- 47:14 Levenshtein.py, all right.
- 47:18 So this is Python 2.
- 47:25 We're gonna have to change this so that it's Python 3.
- 47:31 But, that's okay.
- 47:33 Okay, so we don't wanna print.
- 47:41 We don't wanna print like that and we don't wanna print like that.
- 47:46 And this should be input and input.
- 47:54 And that should be like that.
- 48:03 Okay, so let's try this out.
- 48:05 So if I do aa and ab,
- 48:09 'range' object does not support item assignment.
- 48:13 That's true.
- 48:14 It doesn't.
- 48:21 Where did we try to do item assignment?
- 48:24 Class range does not define that, okay.
- 48:26 So matrix, at this point, This is
- 48:31 why you don't use really bad little variable names, kids.
- 48:37 Okay, I'm not even gonna worry about that.
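For reference while the port is being debugged: a compact Python 3 Levenshtein distance in its standard dynamic-programming form. This is the textbook version, not the exact script from the stream:

```python
def levenshtein(a, b):
    # prev[j] holds the edit distance between the processed prefix of a
    # and b[:j]; we fill the matrix one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(
                prev[j] + 1,        # deletion
                cur[j - 1] + 1,     # insertion
                prev[j - 1] + cost  # substitution (free if chars match)
            ))
        prev = cur
    return prev[-1]
```

With this version, "aa" vs "ab" is 1 and "aba" vs "bbb" is 2 — so the distance of four the stream got suggests the Python 2 port still had a bug at that point.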
- 48:42 So we get the length of each of the two strings, okay?
- 48:47 And then we do range, length of the first one + 1.
- 48:56 Okay, * length of the second one + 1.
- 49:01 And then for each thing that's in range,
- 49:05 length of the second one + 1, we do matrix,
- 49:10 whatever that thing is = range.
- 49:17 Where we're at plus the length of the first one + 1.
- 49:24 Okay, so then for zz in range length of the second one,
- 49:29 for sz in range length of the first one,
- 49:33 if, Okay, if the first string plus the place
- 49:39 that we're on is equal to the second string plus the place that we're at,
- 49:47 Then matrix[zz + 1], I see.
- 49:55 I need to turn this into a list.
- 50:25 Okay, so we're doing the min out of each of those, and
- 50:29 then down here, we need the same thing.
- 50:34 So that equals this list here.
- 50:40 And this list here, And this list here.
- 51:03 Okay, let's try this one again, aa,
- 51:08 ab, and our distance is four, okay.
- 51:14 So what about aa, bb?
- 51:18 Our distance is four.
- 51:19 Maybe, this isn't gonna do what I want it to do.
- 51:25 [LAUGH] What was that other example that they had?
- 51:28 aba and bbb,
- 51:38 Yeah, cuz that's got more than one character.
- 51:41 So that's six, so maybe, if the distance
- 51:56 Okay, so aa and bb, that was also a four.
- 52:03 All right, so somebody asked if the C programming language
- 52:08 is able to create an online game like an MMORPG or
- 52:12 edit someone else's online game, could you create an MMORPG?
- 52:18 Yes.
- 52:21 Could you edit someone else's MMORPG?
- 52:23 Probably not.
- 52:28 Okay, Humanoid, I'm gonna not worry about this one.
- 52:32 How do I see the challenge?
- 52:35 This one, I'm guessing.
- 52:41 Okay, get ready.
- 52:43 You assume I know what I'm doing, CodeFights, Look at all your stars, man.
- 52:49 You got four stars.
- 52:52 All right, now I'm not promising anything,
- 52:53 I'm not promising I'm gonna do better than you.
- 52:55 I'm not promising I'm gonna do worse than you, either.
- 52:58 Okay, given an array of integers, find the pair of adjacent elements
- 53:03 that has the largest product and return that product.
- 53:09 So, for input array 3, 6, -2, -5, 7, 3, the output should be 21.
- 53:16 7 and 3 produce the largest product, okay?
- 53:24 Okay.
- 53:25 So let's start that.
- 53:29 Come on, you all, this is not how you format code.
- 53:33 Okay so,
- 53:36 [MUSIC]
- 53:40 First equals that and that.
- 53:41 [MUSIC]
- 53:44 Was this your code?
- 53:46 Or is this.
- 53:50 No, this is just some code that they gave.
- 53:52 Okay.
- 53:56 Best equals none.
- 53:59 Yeah, best equals none for x, y and z.
- 54:06 Input array.
- 54:07 Input array.
- 54:08 1: if x times y
- 54:13 greater than best.
- 54:19 Actually, if best is none or,
- 54:28 So y equals that, best equals x times y.
- 54:36 Return best.
- 54:51 Expected output got 18, expected 21.
- 55:08 You think I can only change one line?
- 55:12 Let me reset the code.
- 55:16 Okay so they're doing best = InputArray[0] * InputArray[1], cur = best.
- 55:23 For i in range(1, len(InputArray) - 1).
- 55:28 cur = InputArray[i] * InputArray[i + 1].
- 55:32 If best < cur, cur = best.
- 55:38 Return best, and running that one, Fails, it fails all of the tests, right?
- 55:46 Two out of eight.
- 55:54 See, I don't like the way they're doing this.
- 56:00 Okay, I'm probably not gonna beat you, as I'm having to code on the spot on camera.
- 56:06 So, [LAUGH], okay.
- 56:22 Yeah, this is a silly way of doing it, like this is fine, right?
- 56:24 Like that's fine, whatever, do that.
- 56:31 But for x, y in zip,
- 56:35 input array, input array
- 56:40 [1:], Right?
- 56:45 Because that way you're never comparing two numbers to themselves.
- 56:51 So, cur is equal to x times y, if this is less than that, okay.
- 56:58 Run your sample tests.
- 57:01 Number of lines changed in the sample code.
- 57:05 I'm gonna look over here at the rules.
- 57:08 There's one bug.
- 57:11 I'm just looking for a bug, I didn't know I was just looking for a bug.
- 57:19 All right, best is equal to that times that.
- 57:26 I wanna rearrange that,
- 57:39 This should be best equals cur.
- 57:46 Yeah.
- 57:49 No, Humanoid, it is cool, it's not what I was expecting, right?
- 57:54 So, yeah.
- 57:56 [LAUGHING] All right, now I get it.
- 58:01 Now I see the rules.
- 58:02 Okay, cool.
- 58:04 [LAUGHING] Let's go over the next one.
- 58:11 Okay.
- 58:11 Given an array of integers, find the product of its elements.
- 58:16 So I've got an array full of numbers, get back the total product.
- 58:22 Rules on this one.
- 58:23 Recovery task.
- 58:24 The highlighted area of the code is missing, understand and recover it.
- 58:28 Note that multiple lines of code might be missing.
- 58:30 Okay, so I'm gonna get an array, and
- 58:34 then I need to return the total product of that array.
- 58:41 Okay?
- 58:43 Should be easy enough.
- 58:45 Can I start?
- 58:48 Do I just wait for the timer to run out?
- 58:56 I guess I gotta wait for the timer to run out.
- 58:57 All right, let's see what's going on in chat.
- 59:01 And if anybody else wants to challenge me I'll probably have time to do another one
- 59:06 after this, so feel free to go sign up on Code Fights and fight me on something.
- 59:12 We have a 17-year-old from Egypt.
- 59:13 That's cool.
- 59:14 How you doing?
- 59:15 You've got skills in Visual Basic.
- 59:16 That's a good starting language.
- 59:18 And Peter, or Petter?
- 59:23 Tell me which one of those it is.
- 59:25 I'm glad that you're enjoying the series.
- 59:26 That's cool.
- 59:28 All right, so let's start.
- 59:30 What is up, why can't I?
- 59:35 I'm gonna refresh.
- 59:37 And now watch, I'm gonna have to do the first fight again.
- 59:42 [SOUND] Look what you've done, Humanoid.
- 59:50 Look at this.
- 59:51 Now we're all just watching a spinner.
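Putting the adjacentElementsProduct fix together — the zip variant from the stream with the one-line correction (best = cur, not cur = best) applied. The function name is mine, not CodeFights':

```python
def adjacent_elements_product(input_array):
    # zip pairs each element with its right-hand neighbor,
    # so we never multiply a number by itself.
    best = None
    for x, y in zip(input_array, input_array[1:]):
        cur = x * y
        if best is None or cur > best:
            best = cur  # the sample code had this backwards: cur = best
    return best
```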
- 59:54 Humanoid [LAUGH].
- 59:58 No, it's not your fault at all, it's CodeFights being weird, I guess.
- 1:00:03 Privacy Badger, yeah, you can block UserVoice, I don't care about UserVoice.
- 1:00:11 I have no code friends online, no.
- 1:00:15 Okay, so it's figured out that we're on Round 2.
- 1:00:19 All right, cool, hey, there we go.
- 1:00:22 Okay,
- 1:00:37 Seriously?
- 1:00:45 No, I know what this is supposed to be, this is supposed to be InputArray[0].
- 1:00:52 Yeah, CodeFights is really slow. Double t in the second one, so it's Petter and not Peter.
- 1:01:00 Okay, well Petter, I'm happy to have you here.
- 1:01:05 What can you learn when you're 17 years old?
- 1:01:07 You can learn anything you want.
- 1:01:09 I mean, yeah go learn Python, go learn PHP, go learn JavaScript,
- 1:01:14 go learn C#, learn whatever.
- 1:01:20 How are y'all doing?
- 1:01:21 [LAUGH] I guess we're gonna just wait here.
- 1:01:27 Can I list the extensions that I'm using in Chrome, which one?
- 1:01:31 I have a lot of extensions in Chrome.
- 1:01:34 If you can tell me which one you want to know.
- 1:01:39 Yeah, you're not a beginner, you've been doing VB for years.
- 1:01:43 So yeah, go build pretty much anything you want, right?
- 1:01:49 Use VB to build apps or whatever.
- 1:01:55 Sweet, so I passed that one.
- 1:01:56 All right, let's go to the next one.
- 1:01:58 All right,
- 1:01:59 the algorithm should return the smallest non-negative integer of n digits' length.
- 1:02:04 So, if we want an integer that is three digits long,
- 1:02:09 then we return the smallest one that is at least that.
- 1:02:13 Okay, I don't care about solutions.
- 1:02:17 CodeWriting, so I'm just writing straight code in this one.
- 1:02:20 Okay, Let's start coding,
- 1:02:26 all right, so what do we want here?
- 1:02:31 We want length =
- 1:02:42 length = n?
- 1:02:46 I don't even need that, okay.
- 1:02:49 So what I want is the smallest number,
- 1:02:55 so let's see, base =,
- 1:03:05 10 to the power of n.
- 1:03:12 That's not right, cuz if I do that and
- 1:03:16 I get 10 to the power of 1, I get 10.
- 1:03:26 And I don't want 1, All right, let's just do this.
- 1:03:37 Return min(map(int,
- 1:03:48 filter(lambda x: len(x) == n))),
- 1:03:57 I'm going to break this up a little bit.
- 1:03:59 Because y'all won't ever be able to read this,
- 1:04:05 and I won't remember what I was doing [LAUGH].
- 1:04:12 Okay, Okay,
- 1:04:19 so lambda [SOUND] and map(str,
- 1:04:23 [MUSIC]
- 1:04:35 Hold on, I'm not thinking this through at all.
- 1:04:40 [MUSIC]
- 1:04:43 I'm thinking it's gonna feed me numbers, okay.
- 1:04:46 If n == 1, return 1, else: return 10 to the power of n.
- 1:04:53 [MUSIC]
- 1:05:05 My favorite Chrome extension.
- 1:05:07 [LAUGH] My favorite one is probably Privacy Badger, or
- 1:05:11 uBlock Origin, because it keeps me from having to see annoying things.
- 1:05:18 Also HTTPS Everywhere, those are great, I use those a lot.
- 1:05:25 Peter, you are just catching up, that's cool, yeah,
- 1:05:28 we've been doing this about six months now so there is quite a bit to watch,
- 1:05:32 hopefully you will like all the stuff that you watch.
- 1:05:37 Zander, any good sites to practice Lua?
- 1:05:39 So I don't know anything for practicing Lua specifically, but
- 1:05:42 there is this github.com/karan/Projects.
- 1:05:47 These are language-agnostic challenges, so
- 1:05:52 you can go and do them in any language that you wanna do.
- 1:05:56 So Java, Lua, Python, PHP,
- 1:05:58 whatever, cuz you're just doing them on your own computer.
- 1:06:02 So you can kinda do whatever it is you wanna do but
- 1:06:05 it's a really good test there.
- 1:06:15 That should be n - 1.
- 1:06:42 All right, how did we do?
- 1:06:45 You beat me.
- 1:06:47 Well, good for you.
- 1:06:49 Good job.
- 1:06:50 Yeah, that was, those were good challenges.
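The n-digit challenge eventually boils down to that n - 1 exponent. A sketch — treating 0 as the smallest 1-digit non-negative integer is my reading of the task, not necessarily how the site grades it (the stream special-cased n == 1 differently):

```python
def smallest_n_digit(n):
    # Assumption: 0 counts as the smallest 1-digit non-negative integer.
    if n == 1:
        return 0
    # Otherwise it's a 1 followed by n - 1 zeros.
    return 10 ** (n - 1)
```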
- 1:06:54 Like I said, you're probably gonna beat me because I'm doing this live on camera.
- 1:06:58 And I didn't know what the challenges were, of course.
- 1:07:02 So, but that's cool.
- 1:07:04 Code Fights is a really neat thing, I need to play with it more and
- 1:07:11 I need to do more coding on this.
- 1:07:19 I am wondering if some of this time should be stripped because of just how
- 1:07:24 freakishly long it took to submit those things, right?
- 1:07:29 Like, some of those things took minutes to submit.
- 1:07:33 Maybe that's because I'm streaming, that could very much be part of it because I'm
- 1:07:37 pushing out, you know, video to everybody.
- 1:07:40 But, yeah, that's pretty hardcore.
- 1:07:45 But yeah, great job, Humanoid.
- 1:07:47 So everybody give Humanoid some congratulations, some thumbs up.
- 1:07:52 So, Sca Kraft, cuz I can't pronounce your Arabic name there, sorry.
- 1:07:58 You said you want to make 3D games.
- 1:08:00 You're probably gonna wanna learn something other than VB to make 3D games.
- 1:08:07 Those two words sound a lot alike.
- 1:08:09 Java is an okay choice, but you're probably best off really with C++ or C#.
- 1:08:16 If you do C++ then you can build stuff with Unreal Engine,
- 1:08:22 C#, you can build stuff with Unity.
- 1:08:29 Those are probably your best two choices for
- 1:08:34 doing 3D things.
- 1:08:37 All right.
- 1:08:39 Do you all want to look at Rust?
- 1:08:40 You want me to show you
- 1:08:43 some Rust that I've done,
- 1:08:44 by playing, totally from tutorials, totally from other stuff.
- 1:08:48 But, I can show you some Rust, because Rust is kind of cool.
- 1:08:53 It's neat.
- 1:08:58 Marion, thanks, I'm glad that I look friendly for my age.
- 1:09:04 I won't tell you how old I am or am not.
- 1:09:08 But yeah, I try to be friendly.
- 1:09:16 I'm glad that came through.
- 1:09:19 So Rust is kind of cool because, well, this is Rust.
- 1:09:26 Let me make this a little bigger.
- 1:09:27 I know that you all have a harder time seeing it on
- 1:09:32 the screen because of it being smaller.
- 1:09:38 So let's go to 18 for that.
- 1:09:39 That should be big enough.
- 1:09:42 All right, cool.
- 1:09:44 So Rust is cool.
- 1:09:45 Rust is a language that looks a lot like,
- 1:09:48 it doesn't really look like Java but it kinda looks like Java.
- 1:09:53 It doesn't look like C, but it kinda looks like C.
- 1:09:57 And it doesn't look like Python, but it works a lot like Python.
- 1:10:00 A lot of it is very similar, to me, to Python,
- 1:10:04 and having done Ruby in the past, very similar to that.
- 1:10:08 So you always have this main.rs file in your project,
- 1:10:12 and then inside that there's always this function called main.
- 1:10:17 Now Rust is a very functional programming language,
- 1:10:22 it's meant to do functional stuff.
- 1:10:26 And so yeah, you'll find a lot of functional
- 1:10:31 programming oriented stuff in Rust.
- 1:10:36 So this is FizzBuzz, so I'm sure you've all seen FizzBuzz at some point.
- 1:10:40 The idea is you take all the numbers from 1 to 100,
- 1:10:42 if the number can be divided by 3 you print out fizz,
- 1:10:46 if it can be divided by 5 you print out buzz, if you can divide it by both 3 and
- 1:10:52 5 you print out fizzbuzz, otherwise you just print out the number.
- 1:10:58 So this is a for loop, as you can tell by the use of the word for.
- 1:11:05 [LAUGH] And so this gives us, if I was doing this in Python it would be for
- 1:11:11 num in range(1, 101), right, that's what I'd write in Python.
- 1:11:17 So this is for num in 1..101, very similar.
- 1:11:25 And so inside of there, if the number divides by 3 and
- 1:11:28 the output of that is 0 then I check to see if it'll also divide by 5 and if so
- 1:11:32 print out fizzbuzz, otherwise I'm gonna print out fizz.
- 1:11:37 If it divides by 5 evenly we print out buzz, otherwise we print out the number.
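Since the stream keeps translating the Rust back into Python, here's the same FizzBuzz structure as a Python sketch — divisible-by-3 checked first, with the combined case nested inside, as described above:

```python
def fizzbuzz(num):
    # Mirrors the nesting in the Rust version: check 3 first,
    # then check 5 inside that branch for the combined "fizzbuzz" case.
    if num % 3 == 0:
        if num % 5 == 0:
            return "fizzbuzz"
        return "fizz"
    if num % 5 == 0:
        return "buzz"
    return str(num)

# Rust's `for num in 1..101` becomes range(1, 101) here.
for num in range(1, 101):
    print(fizzbuzz(num))
```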
- 1:11:41 Any time you see this exclamation mark like that, that's a macro,
- 1:11:45 you're not exactly running a function but you're kinda sorta running a function.
- 1:11:53 This is an area I don't know yet so ignore me on a lot of this, but it's pretty cool.
- 1:11:59 This is pretty simple to read I think, and if we run it you'll see
- 1:12:04 that it's very fast, it runs very quickly, so that's cool, that's fun.
- 1:12:10 Let's look at something a little bigger that I did with the help of a tutorial.
- 1:12:16 Because I don't know all of this
- 1:12:21 myself yet, so we'll find out.
- 1:12:27 Tska-craft, I cannot teach you C++ and Treehouse won't teach you C++, but
- 1:12:32 we will teach you C# and we can teach you Unity, so we have both of those.
- 1:12:37 And am I the person who appears in Treehouse advertisements?
- 1:12:43 I'm probably in some of them but
- 1:12:46 I'm not in all of them, I don't think, [LAUGH] I don't know.
- 1:12:52 Ryan, what is the best language or framework for back-end webdev,
- 1:12:55 give me an answer straight up.
- 1:12:58 I can't give you an answer straight up because there's no answer to that,
- 1:13:04 there's no best language, there has never ever been a best language,
- 1:13:07 there will never ever be a best language, right.
- 1:13:09 It depends on the programmer, it depends on the system,
- 1:13:14 it depends on the requirements of the software, there's no such thing as a best.
- 1:13:20 There's no best text editor, there's no best IDE, there's no best web browser,
- 1:13:23 there's no best language, there's no best anything.
- 1:13:29 Yeah, okay.
- 1:13:30 Shamul, you wanna learn more about Python,
- 1:13:34 well I heavily recommend Treehouse for that.
- 1:13:37 [LAUGH] But if you don't wanna pay for Treehouse or
- 1:13:41 you can't afford Treehouse, which is totally fine and
- 1:13:46 understandable, then there are pretty good tutorials on python.org.
- 1:13:52 And there's a couple of amazing books out there, - 1:13:56 Automate The Boring Stuff by Al Sweigart is free online I do believe. - 1:14:01 Let's look that up real quick, Automate The Boring Stuff. - 1:14:06 And then Python Crash Course or Crash Course Python, - 1:14:12 I don't remember which one of those it is, is a really great book as well and - 1:14:17 I highly, highly recommend both of those. - 1:14:21 There you go, try that out. - 1:14:24 So yeah, let's talk about this Rust script real quick and - 1:14:27 then we'll go do something else. - 1:14:29 Somebody else challenged me to a Code Fights fight, - 1:14:36 that was fun to do so yeah, somebody come fight me on this. - 1:14:42 My username is KennethLove, I'm really hard to pick out, you'll never find me. - 1:14:49 All right cool, so this is a number guessing game, right, so let's run this. - 1:14:58 So it's a number between 1 and 100 so let's guess 50, - 1:15:01 that's too small so let's guess 75, that's too big. - 1:15:06 So it's between 50 and 75, let's go for 65, that's too big. - 1:15:12 So it's between 50 and 65, let's go for 60, too big, 55, - 1:15:18 too small, 57, too small, 58, then it's gotta be 59. - 1:15:23 Sweet, and it took me eight tries. - 1:15:28 So some of this is my own thing that I added, - 1:15:31 which is the how many guesses you spent, the rest of it is from the tutorial. - 1:15:37 So we're bringing in this random crate. - 1:15:41 Sure, Humanoid, I'll do another one against you. - 1:15:44 And then kind of like with Python, kind of like with Java, - 1:15:47 we have to bring in packages to be able to use them. - 1:15:51 So in this case we're bringing in the IO package from the standard library. - 1:15:54 We're bringing in the ordering package from the comparison package - 1:15:58 inside the standard library. - 1:15:59 They're actually not called packages, they're called crates, - 1:16:04 Rust has crates which is weird but whatever. 
- 1:16:08 And then here we're bringing in the Rng thing,
- 1:16:13 whatever that is, from the rand crate.
- 1:16:17 And then inside of our main function we print out guess the number,
- 1:16:22 then we generate a random number between 1 and 101.
- 1:16:26 And then here I'm creating a variable called num_guesses, that's gonna be zero.
- 1:16:32 And this right here, this M-U-T, the mut,
- 1:16:36 makes it mutable, so that makes it to where you can change the value later.
- 1:16:40 Because by default in Rust all variables are immutable,
- 1:16:43 they can never be changed once they're set.
- 1:16:45 And that's actually really cool, that's really,
- 1:16:48 really handy, so yeah, that's amazing.
- 1:16:53 So then this loop here is just an infinite loop, it runs forever,
- 1:16:58 this is the equivalent of Python's while True, right.
- 1:17:02 So that's what you've got.
- 1:17:04 All right, Humanoid, I'll fight you in a second.
- 1:17:10 So then we print out put in your guess, and then here we take guess and
- 1:17:15 it's gonna be this new string.
- 1:17:17 This one we mark as being mutable because we wanna change the string
- 1:17:20 once they input one.
- 1:17:22 So we take standard in,
- 1:17:26 we read the line, and then here we're saying give me a reference to guess.
- 1:17:32 Let me play with guess and if it doesn't
- 1:17:38 come back as something that's actually a string for some reason
- 1:17:43 then this expect will come out, or then this will be printed out.
- 1:17:49 So then guess, we're gonna turn that into an unsigned 32-bit number,
- 1:17:54 we're gonna trim it of any white space and then we're gonna parse it,
- 1:17:57 that's what's gonna turn it into that u32.
- 1:18:00 If it parses okay,
- 1:18:03 if it's fine, then we just give back the num, give back the number.
- 1:18:08 And otherwise if it doesn't, then we continue with the loop,
- 1:18:11 we do the loop again, okay.
- 1:18:14 And then here I increment the number of num_guesses by 1 and
- 1:18:19 we print out here's what you guessed.
- 1:18:21 And then here, this is pretty cool, we do a match comparison, so
- 1:18:25 we take guess, we do a comparison on it against the secret number that got picked.
- 1:18:31 If it's less, then we say hey, your number was too small.
- 1:18:35 If it's bigger, if it's greater, then we print out your number's too big.
- 1:18:40 Or if they're equal, then we print out you win,
- 1:18:42 here's how many tries it took you, and then we break the loop.
- 1:18:46 This is actually pretty cool, this is really,
- 1:18:50 that's 40 lines of code and that's pretty amazing.
- 1:18:55 We did a number guessing game almost exactly like this,
- 1:18:58 in Python Basics I think it was, one of my courses,
- 1:19:02 it took us way more than 40 lines to do it, so that's pretty cool.
- 1:19:07 I think Rust is neat, so
- 1:19:11 yeah, hopefully y'all will check out Rust.
- 1:19:15 They have a tutorial on the official site that's really good, so try that out.
- 1:19:19 Okay, we got a new challenge, so Humanoid wants to challenge me again.
- 1:19:25 Segar says there's a Vimium Chrome extension,
- 1:19:30 I may have to check that out, I may have to give that a try.
- 1:19:32 Tska-craft wants to know how they can work for
- 1:19:36 Treehouse if they become good in one of the languages that we use.
- 1:19:40 We hire every once in a while, it's always possible, but
- 1:19:45 $1,000 per hour, that's probably not gonna happen.
- 1:19:51 Just because, yeah, nobody's gonna get paid $1,000 an hour.
- 1:19:58 $1,000 an hour if you wanna work a really small number of hours.
- 1:20:01 Sure, all right, Humanoid, let's do this again.
- 1:20:05 I'm gonna take a minute off for
- 1:20:07 every one of these, maybe two, just because of me reading it.
- 1:20:11 Okay, given three integers, a, b, and c.
- 1:20:14 It's guaranteed that two of these integers are equal to each other.
- 1:20:16 What is the value of the third integer?
- 1:20:21 Bug fix task, aw, man.
- 1:20:24 Okay, extra number, if a equal to b, return c, if c not is,
- 1:20:38 That's just silly, all right, next.
- 1:20:42 Given a permutation, produce its inverse permutation.
- 1:20:45 Permutation one, three, four, two, the output should be one, four, two, three.
- 1:20:55 [MUSIC]
- 1:21:04 Okay, result is those, blank, for i in range that, result that append that, for
- 1:21:10 i in range, the length of permutation, result permutation i
- 1:21:15 minus 1 equals, Result.
- 1:21:26 Okay, so that became 1.
- 1:21:28 2 became 3, [NOISE] this is one I'm not sure of,
- 1:21:36 because I'm not sure what
- 1:21:41 they're trying to do here.
- 1:21:45 List index out of range, yeah, of course it is, because,
- 1:21:58 No, this one doesn't count, this one's impossible,
- 1:22:00 I've had it before, I never mentioned this.
- 1:22:02 Okay, I get the rules, I don't get the inverse permutation.
- 1:22:10 Let's go look at, well that's not gonna help me.
- 1:22:15 An inverse permutation is a permutation in which each number and
- 1:22:19 the number of the place which it occupies are exchanged.
- 1:22:25 Each number and the number of the place that it occupies,
- 1:22:34 So 1 stayed in 1, 3, 0,
- 1:22:38 1, 2, 3 went to there,
- 1:22:42 4 went to there, 2 went to there?
- 1:22:49 But 4 is not in the right place, shouldn't 4 be,
- 1:23:03 An inverse permutation is a permutation in which each number and
- 1:23:08 the number of the place it occupies are exchanged.
- 1:23:22 Okay, I wanna see what these tests look like.
- 1:23:24 Okay, that's that one, 1, 2, 3 checks out as 1, 2, 3,
- 1:23:29 because 1 stays in 1, 2 stays in 2, 3 stays in 3.
- 1:23:34 Here, 1 stays in 1.
- 1:23:42 Okay, do any of you all get this one?
- 1:23:43 [LAUGH] Cuz I am lost on this one.
- 1:23:48 Okay, let's just look at what the code is doing, so we have a blank list.
- 1:23:53 So then, for each number in a range the length of the permutation,
- 1:23:57 we're appending 0, okay?
- 1:24:01 So then, for an index, for the length of the permutation,
- 1:24:07 we take the permutation that number minus 1.
- 1:24:14 So, That's gonna be 0 or whatever.
- 1:24:23 So that first one's gonna be negative 1 and that should equal,
- 1:24:37 I don't think that's right, I can only change dot, dot, dot, confirm.
- 1:24:53 You know what they want, but you don't know how to do it?
- 1:24:55 Well, you're doing better than me on that one then.
- 1:25:01 So 1 stays in 1, The number and
- 1:25:07 the number of the place which it occupies.
- 1:25:14 [MUSIC]
- 1:25:23 Okay, okay, okay hold on.
- 1:25:25 So this is gonna take permutation i minus 1, right?
- 1:25:30 So that's gonna be permutation i, the first one is gonna be 1.
- 1:25:37 Minus 1 gonna be 0, so that should be equal
- 1:25:43 to something, let's not worry about that one.
- 1:25:48 The next one that comes through is gonna be 3, 3 minus 1 equals 2.
- 1:26:02 3 minus 1 equals 2, so that's the second spot,
- 1:26:08 [MUSIC]
- 1:26:21 [LAUGH] Yeah, this one, the number, so if we went to the end, the inverse is to, so,
- 1:26:27 yeah, the number goes where the item in the array at the index that is the number.
- 1:26:33 The number goes where the item in
- 1:26:37 the array at the index that is the number.
- 1:26:41 But there's no 0 in here, so
- 1:26:45 [LAUGH] one goes into 1, 4 goes into 2.
- 1:26:54 I can't make four cases Skycode,
- 1:26:56 I can't change the code beyond what I'm doing right now.
- 1:27:01 Okay, I'm gonna look at this, I'm gonna get this one.
- 1:27:09 That one passed, cuz I basically sorted the thing.
- 1:27:13 Okay, so they're expecting 4 to come into the second one.
- 1:27:16 So we've got 3 which is at index 2, 3 ends up at index 3.
- 1:27:24 The number goes where the item in the array at the index that is the number.
- 1:27:35 Yeah, how is this not just permutation[i]?
- 1:27:41 Like that can't be right,
- 1:27:48 yeah, 2, 4, 1,
- 1:27:53 3, yeah, whatever
- 1:28:13 Okay, I'm just gonna skip this one,
- 1:28:15 I don't even care at this point [LAUGH] we'll figure it out later.
- 1:28:19 Okay, given a string replace each, you skipped it too.
- 1:28:23 [LAUGH] Given a string, replace each of its characters by
- 1:28:27 the next one in the English alphabet.
- 1:28:30 z would be replaced by a, so
- 1:28:36 we're shifting it by 1,
- 1:28:41 okay, for letter in input string.
- 1:28:48 Now, let's do output is that.
- 1:28:52 If letter is equal to a,
- 1:28:57 output.append z elif letter
- 1:29:03 equal to z output.append('a') else,
- 1:29:21 Is it always gonna be lowercase?
- 1:29:22 It's always gonna be lowercase.
- 1:29:23 Okay, else
- 1:29:28 output.append(string.ascii_lowercase,
- 1:29:44 string.ascii_lowercase.index(letter) plus 1.
- 1:29:55 [MUSIC]
- 1:30:00 return ''.join(output).
- 1:30:03 [MUSIC]
- 1:30:13 I don't need this.
- 1:30:14 [MUSIC]
- 1:30:22 I just need this.
- 1:30:24 [MUSIC]
- 1:30:27 There we go.
- 1:30:28 [MUSIC]
- 1:30:34 We're still waiting on you.
- 1:30:35 [MUSIC]
- 1:30:40 3 is the second number, so goes where number 2 is placed.
- 1:30:45 [MUSIC]
- 1:30:50 Okay, Daithi, I think I get that.
- 1:30:52 So because 3 was the second item, we go back one to get 2, we find where the 2 is?
- 1:30:59 [MUSIC]
- 1:31:04 Yeah, I got no idea on that one.
- 1:31:06 Let's look that up, let's do Python inverse permutation, right?
- 1:31:14 [MUSIC]
- 1:31:21 Okay, so let's see what they're doing.
- 1:31:22 [MUSIC]
- 1:31:24 So they make an array of zeroes,
- 1:31:27 that's the length of the permutation, which is fine.
- 1:31:32 For each thing in enumerate, yeah
- 1:31:34 [MUSIC]
- 1:31:38 Inverse, whatever the letter is equals that.
- 1:31:40 Yeah, see that would make so much more sense.
- 1:31:43 But they weren't doing that.
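The enumerate-based approach he just described (an array of zeroes, then each value placed by index) can be written out as a short, runnable sketch, using the example input from the problem statement:

```python
def inverse_permutation(permutation):
    # Start with an array of zeroes, one per element.
    inverse = [0] * len(permutation)
    # The value at index i means that i + 1 belongs
    # at (1-based) position `value` in the inverse.
    for i, value in enumerate(permutation):
        inverse[value - 1] = i + 1
    return inverse

print(inverse_permutation([1, 3, 4, 2]))  # → [1, 4, 2, 3]
```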
- 1:31:45 [MUSIC]
- 1:31:49 They were doing something weird.
- 1:31:52 I lost that one again?
- 1:31:53 By three seconds?
- 1:31:55 [LAUGH] By three seconds, really?
- 1:31:58 [MUSIC]
- 1:32:00 I lost by three seconds.
- 1:32:02 Aw man, that's ridiculous.
- 1:32:06 That's absolutely ridiculous.
- 1:32:09 All right we got time.
- 1:32:10 Send me another one or somebody else send me one and
- 1:32:12 let's answer some more questions.
- 1:32:14 Somebody send, give me some more questions.
- 1:32:18 [MUSIC]
- 1:32:21 And we'll talk about whatever.
- 1:32:25 Deiphy, do you think you've got something?
- 1:32:29 What's, So see, Sagar, I don't get that one, right?
- 1:32:38 1, 2, 3, 4, 5, The 2 is in position 1.
- 1:32:52 How does it go to the end?
- 1:32:54 [MUSIC]
- 1:33:00 So zero's index stays.
- 1:33:02 All the other items are placed inversely in terms of indexing.
- 1:33:06 [MUSIC]
- 1:33:10 Okay, pause,
- 1:33:13 let's do it pause, let's do it.
- 1:33:21 This could be a really good way to just test things, to play with stuff.
- 1:33:27 All of you all have way more stars than me, and
- 1:33:29 it's because I don't ever play with this.
- 1:33:31 But you're also all smart, cuz Python 2? And that's just ridiculous.
- 1:33:37 [MUSIC]
- 1:33:38 Okay, given integer n, find n factorial.
- 1:33:41 Okay, so factorial of n blah, blah, blah.
- 1:33:45 So is this a bug fix one?
- 1:33:46 Okay, so yeah, so we do that.
- 1:33:50 I got that one, okay.
- 1:33:52 I've done factorials before
- 1:33:53 [MUSIC]
- 1:33:58 Yeah, if you go ahead and explain I would love
- 1:34:04 to hear an explanation because I am so lost on that.
- 1:34:10 I am also lost on Code Fights,
- 1:34:13 it is not letting me start these fights.
- 1:34:18 [LAUGH] Don't let me start fights, that sounds horrible.
- 1:34:22 But yeah, y'all know what I mean.
- 1:34:27 Why is programming not in our life for real,
- 1:34:29 why are there no programs to see if everyone is doing their job or not?
- 1:34:33 I'm kinda glad there's not.
- 1:34:36 Humanoid, you're marked as Python 2.
- 1:34:39 You better fix that, fix it.
- 1:34:42 I'm kinda glad that there's not.
- 1:34:44 As much as programming in real life would be fun for
- 1:34:48 being able to apply functional programming to some job I have to do and
- 1:34:52 Python washes all the dishes for me or whatever.
- 1:34:58 Yeah, I'm not all that stuck on like, I want there to be coding to monitor me.
- 1:35:06 There already is coding to monitor me all day, every day.
- 1:35:10 But [LAUGH] I don't want that to be something I have to actually deal with.
- 1:35:16 It's just the default, and you never bothered to change it.
- 1:35:18 Well, that's just unacceptable.
- 1:35:20 You gotta have Python 3.
- 1:35:22 We gotta show that Python 3 is better and
- 1:35:26 larger and bigger than Python 2 because it is.
- 1:35:33 All right, we're gonna do this pause as soon as Code Fights plays along.
- 1:35:42 I'm actually really glad programming is not in real life.
- 1:35:44 Programming is already taken too seriously already.
- 1:35:49 If n is equal to zero, return n Return n.
- 1:35:57 Otherwise, we need to return
- 1:36:03 Else return n- 1.
- 1:36:19 I can't edit all those lines.
- 1:36:22 I keep forgetting that the first one is a bug fix.
- 1:36:25 I keep forgetting that the first one is a bug fix.
- 1:36:29 This should be *=
- 1:36:39 Yeah, see look, this says I'm already at two minutes.
- 1:36:41 I'm nowhere near two minutes.
- 1:36:43 The sample tests aren't even loaded. James, what is my most proud Python project?
- 1:36:52 My most proud Python project is probably a package called Django Braces.
- 1:36:57 It's a bunch of code a friend and I wrote for Django's class-based views.
- 1:37:05 So it's a lot of stuff for bringing in
- 1:37:09 a lot of functionality into the class-based views without having to write it yourself.
- 1:37:13 It's pretty popular, or was at one time.
- 1:37:16 And yeah, it's pretty neat.
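The factorial bug fix he mentions (changing the accumulator line to use *=) suggests the task code was an iterative factorial roughly like the sketch below. This is my own reconstruction, not the site's exact code:

```python
def factorial(n):
    # Multiply an accumulator by each value from 2 up to n.
    # The stream's one-line fix was making this line use *=.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # → 120
```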
- 1:37:20 Yeah see, this is ridiculous.
- 1:37:22 It's telling me I'm at two minutes and forty something seconds.
- 1:37:25 This thing right here doesn't even exist.
- 1:37:30 Code Fights is ridiculous.
- 1:37:33 Dythe has an explanation for us.
- 1:37:35 So let's see here.
- 1:37:36 One is in position one so it stays in position one.
- 1:37:39 Okay, we all understand and agree with that one.
- 1:37:42 Three is the second number so it goes wherever number two is placed.
- 1:37:47 I get you, okay.
- 1:37:49 Four is the third number so,
- 1:37:50 it goes wherever number three is originally placed.
- 1:37:52 We'd go so on and so forth.
- 1:37:55 I got ya, okay.
- 1:37:57 That makes sense.
- 1:37:59 That makes a lot of sense.
- 1:38:01 The other stuff is a little weird.
- 1:38:06 I'm still trying to figure that one out.
- 1:38:08 Cool, I gotcha now.
- 1:38:10 [MUSIC]
- 1:38:14 Yeah, I kind of wanna try that one again, but-
- 1:38:17 [MUSIC]
- 1:38:25 This is ridiculous.
- 1:38:27 [LAUGH] What is up with Code Fights?
- 1:38:30 Okay, let's unblock their stuff and see if that makes it work better.
- 1:38:35 I won't block anything, Code Fights, you can show me your stupid ads or whatever.
- 1:38:39 And we'll see if that works better.
- 1:38:42 Cuz I have not spent four minutes on this.
- 1:38:44 Y'all have seen this.
- 1:38:46 I'm sorry.
- 1:38:47 Dahee, okay.
- 1:38:49 Dahee, that's cool.
- 1:38:50 I will do my best to remember that Dahee.
- 1:38:56 All right, So four and a half minutes later.
- 1:39:05 We still haven't loaded the fight.
- 1:39:07 [LAUGH] That's just ridiculous, what is,
- 1:39:12 okay, I'm just gonna get out of that, okay.
- 1:39:18 Code Fights, fix your site, y'all.
- 1:39:21 I know you're watching. This is the most important stream ever for Code Fights.
- 1:39:25 I know Code Fights is paying attention to this,
- 1:39:27 so that's why I'm looking at the camera.
- 1:39:29 Code Fights, fix your site.
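Going back to the alphabet-shift problem from a few minutes earlier in the stream, the solution he pieced together there cleans up into the following runnable sketch. The sample word is my own, not one of the site's tests:

```python
import string

def alphabet_char(text):
    # Shift every lowercase letter to the next one in the
    # English alphabet; 'z' wraps around to 'a'.
    output = []
    for letter in text:
        if letter == 'z':
            output.append('a')
        else:
            output.append(
                string.ascii_lowercase[string.ascii_lowercase.index(letter) + 1]
            )
    return ''.join(output)

print(alphabet_char('crazy'))  # → dsbaz
```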
- 1:39:33 Look at this, this is, look at this, look at this mess.
- 1:39:36 It's not even loading, it's like five minutes.
- 1:39:43 Humanoid, I see the chat probably a little bit
- 1:39:45 faster before it shows up on the screen, but not by much.
- 1:39:49 It definitely takes it a little while to show up.
- 1:39:53 Blame YouTube and Twitch.
- 1:39:56 And Beam a little bit because they
- 1:40:01 all have to show up at the same time.
- 1:40:06 Okay, so I said this was times equals,
- 1:40:09 which I can't type an equals sign to save my life.
- 1:40:13 Run those sample tests.
- 1:40:15 [SOUND]?
- 1:40:22 [SOUND]
- 1:40:33 Yeah.
- 1:40:46 [SOUND] I know how a factorial works, I think.
- 1:40:49 So in 5, we start at 5, immediate five,
- 1:40:53 and doesn't equal zero, so then we would return.
- 1:40:58 [SOUND]
- 1:41:03 This is one!
- 1:41:10 [SOUND] You wanna not.
- 1:41:16 [SOUND] Can only change one, okay.
- 1:41:17 Reset, whatever.
- 1:41:19 [SOUND]
- 1:41:22 There we go.
- 1:41:31 Okay, that was not five minutes and something.
- 1:41:33 You all know that.
- 1:41:34 James Scott, I'm using an app called OBS and a service called Restream.
- 1:41:39 Restream.io lets me send the one stream out to multiple endpoints.
- 1:41:47 Okay, given the number of green apples on a shelf and a total number of apples,
- 1:41:50 return the percentage, excuse me, of green apples.
- 1:41:54 Really?
- 1:41:57 That sounds easier than it should be.
- 1:42:02 [LAUGH] But let's try it.
- 1:42:06 Okay, so return total divided by green times 100.
- 1:42:14 And it looks like you want an int out of that, so int.
- 1:42:19 [SOUND] I don't know,
- 1:42:24 that should be green out of total.
- 1:42:32 What am I doing?
- 1:42:35 [SOUND] Look at me failing.
- 1:42:40 Failing fourth grade math.
- 1:42:47 All right, so the longest diagonals of a square matrix are defined as follows.
- 1:42:53 The first longest diagonal goes from the top left corner to the bottom right one.
- 1:42:57 The second longest diagonal goes from the top right corner to the bottom left one. - 1:43:01 Given a square matrix, - 1:43:02 your task is to reverse the order of elements on both of its longest diagonals. - 1:43:08 Reverse the order of elements on both of its longest diagonals. - 1:43:13 Okay, so that means I want to take the first item from the first list and - 1:43:19 swap it with the last item from the last list and - 1:43:22 then the last item from the first list with the first item from the last list. - 1:43:26 The middle item stays the same. - 1:43:29 Okay, that seems simple enough. - 1:43:32 Okay, so let's look back over here. - 1:43:35 Poss wants to practice a bit more. - 1:43:38 You're fine. - 1:43:39 Don't worry about it. - 1:43:39 You're doing great. - 1:43:42 James has an app OBS, cool. - 1:43:44 Why don't we keep C++ or C at Treehouse? - 1:43:48 We don't really find them necessary. - 1:43:52 It's friendly enough but it just doesn't need to be. - 1:43:57 Okay, so the length of - 1:44:01 the matrix is always, okay. - 1:44:07 [SOUND] But they're not always the same - 1:44:12 [MUSIC] - 1:44:15 Aha, okay. - 1:44:17 So, diagonals. - 1:44:22 [SOUND] All right, so - 1:44:27 let's get diagonals = 0. - 1:44:33 It's always gonna be 0. - 1:44:34 [SOUND] I wanna make pairs of these. - 1:44:37 0 and then - 1:44:42 matrix. - 1:44:47 [MUSIC] And then matrix- 1. - 1:44:50 [SOUND] Right? - 1:44:54 I may be doing this in a really long, drawn out way. - 1:44:58 [SOUND] So then. [MUSIC] - 1:45:01 Yeah, yeah, yeah, let's do this. - 1:45:06 So for x. - 1:45:07 [MUSIC] - 1:45:12 No. - 1:45:12 For I in - 1:45:15 range(len(matrix))- - 1:45:25 1. - 1:45:27 [MUSIC] - 1:45:41 .append and - 1:45:44 we're going to append i and - 1:45:50 then len(matrix)- i + 1. - 1:45:56 [MUSIC] - 1:46:02 So I'll break this onto a new line so y'all can see what I'm doing. - 1:46:06 So, we're appending. - 1:46:07 In this case, it would be 0 and then 9, right? 
- 1:46:14 [MUSIC] - 1:46:25 Or we just want I + 1, - 1:46:27 [MUSIC] - 1:46:31 Times -1 which I wanna reverse that. - 1:46:35 [MUSIC] - 1:46:37 Okay, so I think that will work. - 1:46:40 So the new_matrix = this. - 1:46:47 It's an empty list. - 1:46:50 [SOUND] for - 1:46:53 m in matrix. - 1:46:58 So for each item in the matrix, I want an index on that too, enumerate. - 1:47:04 [MUSIC] - 1:47:21 new_matrix. - 1:47:22 [MUSIC] - 1:47:31 new_line =. - 1:47:32 [MUSIC] - 1:47:41 I am lost on this one. - 1:47:42 I need code to sit down and think this one through but, okay. - 1:47:49 So I want to take the first item on the first one. - 1:47:56 [MUSIC] - 1:48:07 So that's gonna be equal to m. - 1:48:11 Whatever is in m, and then we wanna take new_line. - 1:48:16 [MUSIC] - 1:48:21 And we wanna take - 1:48:25 diagonals I, 0. - 1:48:30 So whatever that item is, and we want that to be equal to. - 1:48:37 [MUSIC] - 1:48:41 We don't even need that. - 1:48:42 [MUSIC] - 1:48:45 We don't even need that. - 1:48:46 Okay, so we can just do this. - 1:48:48 We can do matrix, which means really I can do this as for In, - 1:48:54 [MUSIC] - 1:48:57 range(len(matrix)), right? - 1:49:03 So, matrix, the one that we're on. - 1:49:10 [SOUND] The zeroth item. - 1:49:12 Whatever item comes out of diagonals. - 1:49:16 There. - 1:49:17 [MUSIC] - 1:49:32 Is equal to matrix. - 1:49:34 [MUSIC] - 1:49:40 I + 1 times -1, - 1:49:48 diagonals I, 1 - 1:49:56 [MUSIC] - 1:50:04 I don't think I'm quite right on that one. - 1:50:07 [MUSIC] - 1:50:21 Okay, That should always go to the other side. - 1:50:29 [MUSIC] - 1:50:36 >> That should always get the last item, right? - 1:50:38 And I don't need new_matrix. - 1:50:43 [MUSIC] - 1:51:23 The code's fine. - 1:51:23 This should be minus one. - 1:51:31 So some of these I got completely wrong. - 1:51:35 Yeah, ha ha, end is going to be equal to matrix. - 1:51:45 Matrix i plus 1 times - 1:51:50 negative 1 diagonals, i negative 1. 
- 1:51:57 Okay, so that's the last value, right? - 1:52:02 So, and then let's say start is equal to matrix - 1:52:10 I diagonals I zero, - 1:52:15 okay, so that's the start value and that's the last value. - 1:52:19 So the start value is gonna be equal to end. - 1:52:25 And the last value here is gonna be equal to start. - 1:52:29 And run those tests. - 1:52:33 No. - 1:52:43 I forgot to get the last one. - 1:52:50 Okay, I'm- [SOUND] Okay, so - 1:52:55 I also have to get, The last one on the end. - 1:53:01 Can you create a string and swap the numbers there and - 1:53:04 recreate a square matrix? - 1:53:06 I probably could on that one. - 1:53:10 All right lets see if I can remember - 1:53:14 it was [INAUDIBLE] Did I get that one right? - 1:53:21 I could probably do that one. - 1:53:25 Strings, swap the numbers, then recreate a square matrix. - 1:53:28 It's not any easier with a string than it is with a list, so - 1:53:39 So it's fine, okay. - 1:53:40 Here, we'll do this. - 1:53:41 We'll do Start Left, End Right, and - 1:53:47 we'll do Start Right and End Left. - 1:53:51 So Start Right Is gonna be equal to matrix, i, - 1:53:58 so whatever road we're on, -1, right? - 1:54:04 Always, and this is gonna be equal to matrix - 1:54:08 [MUSIC] - 1:54:14 Here, - 1:54:14 [MUSIC] - 1:54:19 Zero always, right? - 1:54:22 So then this one here ends up equaling - 1:54:27 and_right and matrix i- 1, - 1:54:31 it's not always gonna be- 1 though. - 1:54:45 That's gonna be equal to that. - 1:54:45 This one, - 1:54:54 Here This - 1:54:59 last one will be equal to start left, this one will be equal to start right. - 1:55:04 This isn't exactly right. - 1:55:07 This is totally wrong. - 1:55:16 Invalid syntax, yeah you're right. - 1:55:19 [MUSIC] - 1:55:42 How did that come out as 4? - 1:55:44 Because of that -1, So. 
- 1:55:55 All right, I'm not worried about this one,
- 1:55:58 [LAUGH] I'm giving up on this one, you beat me on this, Humanoid,
- 1:56:07 we'll find out what happens later, So, yeah so we're at the end of the stream.
- 1:56:14 Let's take questions.
- 1:56:17 Anybody have questions, let me know.
- 1:56:20 If I have another code fight here I will check it out later.
- 1:56:29 Okay, it's just that one.
- 1:56:31 Anybody that wants to challenge me go for
- 1:56:33 it, I don't know how often I'll be checking out, but I will later so
- 1:56:37 we'll talk about that.
- 1:56:39 If you have any questions ask them now, I'll answer them before the end of
- 1:56:42 the stream.
- 1:56:43 And Critical Toxic would like me to do more Discord bots.
- 1:56:46 I think I will actually.
- 1:56:49 It's a new level, woo.
- 1:56:51 And I lost because that totally took me 6 minutes and 53 seconds.
- 1:56:54 Totally, 100%, right Humanoid?
- 1:56:56 There's no way that didn't take me seven minutes.
- 1:57:00 Good fight.
- 1:57:03 Okay, whatever.
- 1:57:06 Anyway, so I'll do more Discord bots.
- 1:57:11 I wanna do another Discord bot.
- 1:57:12 I really wanna do one that's over the voice feature of Discord.
- 1:57:19 But that's gonna be a little ways off.
- 1:57:20 We'll get to that later.
- 1:57:22 But yeah, I definitely wanna do more Discord bots.
- 1:57:25 If I remember correctly, my very first stream that I did was a Discord bot.
- 1:57:36 No, that was the Django travel blog, right, that was something different.
- 1:57:40 Anyway though, I will do more Discord bots.
- 1:57:44 I will be streaming over the weekend. I will be doing
- 1:57:49 some streams for the Ludum Dare that I'm doing.
- 1:57:55 So if you want to watch me do panicky Rust code and
- 1:57:59 create sprites and maps and things like that, be sure to tune in.
- 1:58:03 That will be on my own Beam channel, though.
- 1:58:05 That will be at beam.pro/kennethlove, it won't be here on Treehouse.
- 1:58:12 I will not be streaming on Treehouse,
- 1:58:13 I'll just be streaming on my own personal channel.
- 1:58:15 So if you wanna come watch that, feel free.
- 1:58:17 You can come and watch.
- 1:58:19 I'd love to have you there.
- 1:58:20 I'll have chat open.
- 1:58:22 We can talk about stuff.
- 1:58:23 It may be just music playing or something, or it may be something else.
- 1:58:29 I'm not sure. I don't know if
- 1:58:30 there will be any audio or not.
- 1:58:31 You may just get to watch me type and put down pixels.
- 1:58:34 So, thank you all for watching.
- 1:58:36 Glad to have had you here.
- 1:58:38 Tune in next week.
- 1:58:39 We'll do something else.
- 1:58:40 I think I'm going to start doing,
- 1:58:41 I want to start playing around with languages, so if you all would like to see
- 1:58:45 me do tutorials, practice, learn different languages.
- 1:58:49 Let me know what different languages you'd like to see and maybe I'll do some.
- 1:58:52 It will be a lot of mistakes.
- 1:58:56 Many mistakes will be made, but that's fine.
- 1:58:59 Mistakes are how we learn.
- 1:59:00 So, thank you all for watching and I will see you all next week.
- 1:59:05 Have a great weekend.
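For reference, the diagonal-reversal challenge from earlier in the stream (the one he gave up on) can be done by pulling both diagonals out, reversing them, and writing them back. This is my own sketch, not the code from the stream:

```python
def reverse_on_diagonals(matrix):
    # Reverse the order of elements on both longest diagonals
    # of a square matrix, in place.
    n = len(matrix)
    main = [matrix[i][i] for i in range(n)]
    anti = [matrix[i][n - 1 - i] for i in range(n)]
    main.reverse()
    anti.reverse()
    for i in range(n):
        matrix[i][i] = main[i]
        matrix[i][n - 1 - i] = anti[i]
    return matrix

print(reverse_on_diagonals([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# → [[9, 2, 7], [4, 5, 6], [3, 8, 1]]
```

For odd-sized matrices the center cell sits on both diagonals, but reversing each diagonal maps the center to itself, so writing it twice is harmless.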
https://teamtreehouse.com/library/practice-projects-and-code-fights
cd hw/xfree86/os-support/linux

1) It seems an error slipped into the patch I submitted for bug #4511. It results in Makefile parsing errors. Sorry about that :( The patch adding the necessary '\' is included here:

--- Makefile.am	2005-10-17 09:18:58 +0200
+++ /var/tmp/portage/xorg-server-0.99.2-r1/work/xorg-server-0.99.2/hw/xfree86/os-support/linux/Makefile.am	2005-10-30 09:47:48 +0100
@@ -4,8 +4,8 @@
 PLATFORM_PCI_SUPPORT = $(srcdir)/../shared/ia64Pci.c
 endif
 if LINUX_ALPHA
-PLATFORM_PCI_SUPPORT = lnx_ev56.c
-	$(srcdir)/lnx_axp.c
+PLATFORM_PCI_SUPPORT = lnx_ev56.c \
+	$(srcdir)/lnx_axp.c \
 	$(srcdir)/../shared/xf86Axp.c
 endif

2) An include file that I believe was found by the build system earlier, is not found anymore. (../shared/xf86Axp.h)

--- lnx_axp.c	2005-07-03 10:53:46 +0200
+++ /var/tmp/portage/xorg-server-0.99.2-r1/work/xorg-server-0.99.2/hw/xfree86/os-support/linux/lnx_axp.c	2005-10-30 10:13:06 +0100
@@ -9,7 +9,7 @@
 #include "os.h"
 #include "xf86.h"
 #include "xf86Priv.h"
-#include "xf86Axp.h"
+#include "shared/xf86Axp.h"

 axpDevice lnxGetAXP(void);

I believe the same problem affects lnx_ia64.c, where #include "ia64Pci.h" is attempted, but I haven't got an ia64 to actually confirm that. Thanks!

Created attachment 3815 [details] [review]
Two fixes put together in a simple patch

Just a reminder. This short patch applies cleanly against

*** Bug 5096 has been marked as a duplicate of this bug. ***

applied the modular build part of this. can you verify that 6.9rc2 plus the change to lnx_axp.c still builds? i don't want to break monolith builds just to fix modular builds.

I used, well, at least the Gentoo-package associated with that. And I disabled compiling $(KBDSRC) in the directory that also contains lnx_axp.c (sorry, but something went wrong in linux/keyboard.h, and I don't know what, this is all irrelevant probably, but being thorough is probably better).
I repeatedly applied and dis-applied the patch for lnx_axp.c, and recompiled lnx_axp.o, it gave me no problems at all.

Still waiting on the other half of the patch..

monolith side applied, thanks!
https://bugs.freedesktop.org/show_bug.cgi?id=4928
Developer's Corner

From pdl

This is meant to be a collection of ideas currently floating around the PDL mailing list for current and future development.

Survey Results

Read the Results from the Fall 2009 Usage and Installation Survey.

PDL Way Forward

All, I've consolidated the topics and discussion from perldl and pdl-porters from the past few weeks as part of an effort to produce a PDL way forward draft. It still needs some corrections to wiki as well as some reorganization. However, the summary list should, at least, be a single place to look for the ideas and consensus as I saw it forming. I see this as a scratch page of specific tasks to move forward with PDL. The list is long but there are some major thoughts and key ideas that stick out:

- Better Handling of External Dependencies
- General Thoughts
- Padre Development for Strawberry Perl Pro Release
- Computational Improvements
- Documentation and Usability Fixes
- Modularize PDL
- Coordinate PDL Plans with Perl6
- Build Process Fixes
- Provide a Baseline Default 2D and 3D Graphics Capability

Development Ideas

Splitting PDL into Components

There seems to be a growing consensus that PDL should be split into a solid, installable-everywhere core, and a handful of supporting modules. The module would be easily installable using the CPAN Bundle namespace, in which case a person wanting to use PDL for their own work would install Bundle::PDL instead of just PDL since the latter would only contain the core.
As first steps in this process, we're going to try doing this with PLplot:

- Move PLplot out of the core
- Package PLplot and create something of a migration guide, including necessary steps with git so that moving future packages out of the core is straightforward
- Repeat for PGPLOT, OpenGL, and other packages deemed 'not necessary for core functionality'

[David/run4flat] I think we should also write a rock-solid Alien::PLplot package that will interface with various OS's package managers to ensure that PLplot is installed using the native package management, or download and compile the source if the native package management does not include PLplot or is unknown. Similar Alien packages can and should be written for OpenGL, PGPLOT, and whatever else gets split off the core.

Interface OpenGL and PLplot

In order to guarantee that PLplot works on any machine, it was suggested that somebody write an OpenGL driver for PLplot. In this way, ensuring that PLplot has a consistent, reliable interface boils down to ensuring that OpenGL properly installs.

What should be in the core

An area that needs to be discussed more. Here's what we know:

- It should have some use beyond just threading. For example, it should probably have linear fitting capabilities
- It should install almost everywhere that Perl can install
- OS packagers have observed that breaking PDL into components should be coarse-grained, not fine-grained, to minimize version skew issues. For example, file IO is something that could easily go either way, but should probably be in the core.

Padre integration

- PDL REPL (Read-Eval-Print-Loop), i.e. perldl shell in Padre
- PDL function look-up as part of the Padre help system
http://sourceforge.net/apps/mediawiki/pdl/index.php?title=Developer's_Corner&oldid=223
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.92.0
- Labels: None
- Hadoop Flags: Reviewed

Issue Links
- is related to HADOOP-7646 Make hadoop-common use same version of avro as HBase - Closed

Activity

@Joep Any chance of seeing the compile error? How comes trunk compiles against hadoop 0.20.x w/o need of annotations? Is it because of the 0.20 profile? If we do a 0.22 profile, option 2., why we need hadoop-annotations? I'm wondering because I seem to remember compiling against 0.22 at one time w/o this dependency; maybe I was dreaming.

I added a separate 0.22 profile without annotations. Build is fine. And no issue found so far while running hbase trunk on hadoop 0.22. Testing a patch locally, will post soon. Initially I thought that HBase depended on hadoop-annotations, but that is a problem with 0.23 (probably a missed dependency in those POMs).

Still seeing compilation error:

[ERROR] Failed to execute goal on project hbase: Could not resolve dependencies for project org.apache.hbase:hbase:jar:0.91.0-SNAPSHOT: The following artifacts could not be resolved: org.apache.hadoop:hadoop-test:jar:0.22.0-SNAPSHOT, org.apache.hadoop:guava:jar:r09: Could not find artifact org.apache.hadoop:hadoop-test:jar:0.22.0-SNAPSHOT in apache release () -> [Help 1]

The guava one is related to HDFS-2189 (and HDFS-2214), which for some reason keeps rearing its ugly head. Just wiped out ~/.m2/repository. Will download the offending POM manually to double-check.

Problem still there on the Hadoop side: The POM still points to org.apache.hadoop#guava and it should be com.google.guava. The source code in hdfs is correct, but the last published build is stale.

Preliminary patch. Still need to resolve downstream hdfs dependency issue first. Asked Konstantin to initiate integration build on hadoop-0.22 in order to publish fixed jar+pom.
Once in, I need to check where the hadoop-test error comes from.

Here is my patch.

Joep and Michael. You fellas are together? Should I apply Michael's last patch? It looks good to me. Do you want to make sure it works w/ what's published for hadoop 0.22 first? (Thanks for doing this work.)

My preference is to have the hdfs pom fixed first, then I'll test the patch and confirm. Turned out that HDFS was broken. I filed a bug for it: HDFS-2315.

The newly published POM seems to have fixed the HDFS guava dependency issue: I'll test the patch again and report back.

Added additional comment and now that HDFS dependencies are fixed, the patch can now be applied.

I applied the above patch and got this:

[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Compilation failure
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/Server.java:[22,29] package org.apache.hadoop.conf does not exist
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java:[29,29] package org.apache.hadoop.conf does not exist
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java:[30,27] package org.apache.hadoop.fs does not exist
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java:[31,27] package org.apache.hadoop.fs does not exist
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java:[37,27] package org.apache.hadoop.io does not exist
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java:[38,27] package org.apache.hadoop.io does not exist
/Users/stack/checkout/clean-trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java:[66,33] cannot find symbol
symbol: class Writable
public class KeyValue implements Writable, HeapSize {
....

Is it because of this change?
      <id>hadoop-0.20</id>
      <activation>
        <property>
-         <name>!hadoop23</name>
+         <name>hadoop20</name>
        </property>
      </activation>

Uhm, yes. Before there were two options: either specify nothing, or specify -Dhadoop23. The nothing/default option was run against 0.20 and was done when hadoop23 was not specified. Now we have three options. Let me see if we can make it something like: !(hadoop22 || hadoop23)

I tried with the specific options, but not without any. Will get back to you....

That should work. We have a build against 0.23 up on apache build box. I'd need to change the config. there on commit but that should be fine.

Committed to TRUNK. I tried it w/o a profile and for 0.23. Todd, I updated build.apache.org so that our 0.23 build now uses this new flag instead. Thanks for the patch Joep. Nice one.

Integrated in HBase-TRUNK #2180 (See)
HBASE-4327 Compile HBase against hadoop 0.22
stack :
Files :
- /hbase/trunk/CHANGES.txt
- /hbase/trunk/pom.xml

This issue was closed as part of a bulk closing operation on 2015-11-20. All issues that have been resolved and where all fixVersions have been released have been closed (following discussions on the mailing list).
Any suggestions?
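For reference, the three build invocations discussed in this thread would look roughly like the following (flag names as quoted above; the exact Maven goals are an assumption, since the thread does not show the commands themselves):

```
# Default: build against hadoop 0.20 (no flag specified)
mvn clean install

# Build against hadoop 0.22
mvn clean install -Dhadoop22

# Build against hadoop 0.23
mvn clean install -Dhadoop23
```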
https://issues.apache.org/jira/browse/HBASE-4327
I am new to Sublime Text as well as Python programming, so forgive me if I am missing something that should be obvious. I am trying to write a plugin for Sublime Text 1 that inserts a timestamp, using this as a reference. My point of confusion is: how do I bind my newly-created plugin to a key command? The sample plugin runs using sublimeplugin:

    import sublime
    import sublimeplugin
    import datetime

    class InsertTimestampCommand(sublimeplugin.TextCommand):
        def run(self, edit):
            # grab the active view
            view = sublime.activeWindow().activeView()
            # generate the timestamp
            timestamp_str = datetime.datetime.now().isoformat(' ')
            # for each region in the selection
            # (i.e. if you have multiple regions selected,
            #  insert the timestamp in all of them)
            for r in view.sel():
                # put in the timestamp
                # (if text is selected, it'll be
                #  replaced in an intuitive fashion)
                view.erase(edit, r)
                view.insert(edit, r.begin(), timestamp_str)

Then what would my key binding look like, and how do I find this information? Thank you very much and I look forward to learning a lot from all of you. Javier
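In the Sublime Text 1 era, key bindings lived in a Default.sublime-keymap file in an XML format. A sketch of what the binding asked about above might look like — I have not verified this against ST1 itself, so treat the element/attribute names, the key combination, and the derived command name insertTimestamp as assumptions:

```xml
<!-- Default.sublime-keymap (Sublime Text 1 XML format; names are assumptions) -->
<bindings>
    <!-- InsertTimestampCommand would be referenced as "insertTimestamp" -->
    <binding key="ctrl+shift+t" command="insertTimestamp"/>
</bindings>
```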
http://www.sublimetext.com/forum/viewtopic.php?f=3&t=2104&start=0
Introduction

Running Couchbase as a Docker container is fairly easy. Simply inherit from the latest official Couchbase image and add your customized behavior according to your requirements. In this post, I am going to show how you can fire up a web application using Spring Boot, Vaadin, and of course Couchbase (as the backend), all using Docker. This is part one of a two-part series where I describe ways to run a fully featured web application powered by Couchbase as the NoSQL backend using Docker toolsets. In this post, I will describe the steps to set up and configure a Couchbase environment using Docker; I will also mention ways to Dockerize the web application (in this case, a Spring Boot application with Vaadin) and talk to the Couchbase backend for the CRUD operations.

Prerequisites

Docker needs to be set up and working. Please refer to the following link for details of the installation: If you are on macOS or Windows 10, you can go for the native Docker packages. If you are on an earlier version of Windows (7 or 8) like me, then you can use Docker Toolbox, which comes with Docker Machine.

The Application

Ours is a simple CRUD application for maintaining a bookstore. Users of the application can add books by entering information such as title and/or author, and can then view the list of books, edit some information, and even delete books. The app is built on Spring Boot. The backend is powered by Couchbase 4.6, and for the front-end I have used Vaadin 7 since it has pretty neat integration with the Spring Boot framework. The main steps to build this app are listed below:
- Run and configure Couchbase 4.6, including setting up the bucket and services, using Docker.
- Build the application using Spring Boot, Vaadin, and Couchbase.
- Dockerize and run the application.

Run Couchbase 4.6 using Docker

The next task is to write the Dockerfile to run and configure Couchbase. But first, check your Docker host IP. You can use:
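The host-IP command was elided in the original post; given the Docker Toolbox context, it is presumably the Docker Machine lookup. A sketch (the machine name default is an assumption):

```
# Docker Toolbox: print the IP of the Docker Machine VM
docker-machine ip default

# On native Docker for macOS / Windows 10, the host is simply localhost
```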
For our application to talk to the Couchbase backend, we need to set up a bucket named "books" and also enable the index and query services on the Couchbase node. The Dockerfile to do all of this can be found here. The Dockerfile uses a configuration script to set up the Couchbase node. Couchbase offers REST endpoints that can easily enable services such as query, N1QL, and index. One can also set up buckets using these REST APIs. The configuration script can be downloaded from here.

Let's try to build and run the Couchbase image now. Go to the directory where the Dockerfile is.

REPOSITORY            TAG     IMAGE ID      CREATED     SIZE
chakrar27/couchbase   books   93e7ba199eef  1 hour ago  581 MB
couchbase             latest  337dab68d2d1  9 days ago  581 MB

Run the image by typing

Sample output:

Verify the configuration by opening the Couchbase console in your favorite browser.

[Screenshot: Configuration]

Type "Administrator" in the Username field and "password" in the Password field and click Sign In. Check the settings of the Couchbase node and verify that they match the configure.sh we used above.

[Screenshot: Couchbase Setting Cluster Ram Quota]

The bucket "books":

[Screenshot: Data bucket settings]

At this point our back-end Couchbase infrastructure is up and running. We now need to build an application that can use this backend to build something functional.

Build the application using Spring Boot, Vaadin, and Couchbase

Go to start.spring.io and add Couchbase as a dependency. This places the spring-data-couchbase libraries in the application classpath.
Since Couchbase is considered a first-class citizen of the Spring Boot ecosystem, we can make use of the Spring Boot auto-configuration feature to access the Couchbase bucket at runtime. Also, add Vaadin as a dependency in the project. We are going to use it for building the UI layer. The project object model (pom) file can be found here.

We create a Couchbase repository like this:

The annotations ensure that a View named "book" will be supplied at runtime to support view-based queries. A primary index will be created to support N1QL queries. In addition, a secondary index will also be provided. The methods have been defined to return List<Book>. We don't have to provide any implementation since that is already provided behind the scenes by spring-data-couchbase.

We need to define the entity, which in our case is Book. We annotate it with @Document.

To enable auto-configuration, use the application.properties or application.yml file as shown below:

One thing to note here is that when the application container runs, it needs to connect to the Couchbase container and set up the auto-configuration. The property spring.couchbase.bootstrap-hosts lists the IP address of the Couchbase node. Here, I have specified 127.0.0.1, which is not going to work, since at runtime the app container will not find the Couchbase container running at that IP. So we need to pass an environment variable (env variable) when running the Docker image of the application.

In order to pass an env variable as mentioned above, we need to write the Dockerfile of the application such that the value of the spring.couchbase.bootstrap-hosts property can be passed as an env variable. Here's the Dockerfile of the app:

As you can see, we are basically overriding the value of the spring.couchbase.bootstrap-hosts property defined in the application.properties file with the env variable HOSTS. This is pretty much all we have to do to wire Spring Boot with Couchbase.
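The application.properties contents were elided above; a minimal sketch, assuming Spring Boot 1.4-era Couchbase property names and the "books" bucket created earlier, with a placeholder so the HOSTS env variable can override the host at runtime:

```properties
# Couchbase auto-configuration (property names assume Spring Boot 1.4.x)
# ${HOSTS:127.0.0.1} lets the HOSTS env variable override the default host
spring.couchbase.bootstrap-hosts=${HOSTS:127.0.0.1}
spring.couchbase.bucket.name=books
spring.couchbase.bucket.password=
```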
UI (U and I)

For the UI, we make use of the spring-vaadin integration. I am using version 7.7.3 of Vaadin, vaadin-spring version 1.1.0, and "Viritin," a useful Vaadin add-on. To install Viritin, add the following dependency:

Annotate the UI class:

@SpringUI
@Theme("valo")
public class BookstoreUI extends UI {
    //////
}

And then hook the repository methods to the UI elements. A bean that implements the CommandLineRunner interface is used to prepopulate the database with some initial values. For the full source code, refer to this link.

Dockerize the application

Using Maven, it's very easy to Dockerize an application using Spotify's docker-maven plugin. Please check the pom.xml file plugin section. Alternatively, you can build using the Docker command line and then run the image. Note that we need to pass the value of the variable HOSTS that our app container is going to look for when it tries to connect to the Couchbase container. The run command would look like:

Once the application is started, navigate to the application URL. The following page shows up:

[Screenshot]

An entry can be edited and saved.

[Screenshot]

There's also a neat filtering feature provided by the N1QL query running underneath.

[Screenshot]

Users can also add a new book and delete an existing record.
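The docker run command mentioned above was elided from the post; under the conventions described, it would pass HOSTS and publish the Spring Boot port. A sketch, where the image name is a placeholder and the IP is the usual Docker Toolbox VM address (both assumptions):

```
# Point HOSTS at the Couchbase container's host; 192.168.99.100 is the
# typical Docker Toolbox VM IP -- adjust for your environment
docker run -d -p 8080:8080 -e HOSTS=192.168.99.100 <your-app-image>
```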
All the CRUD (Create/Read/Update/Delete) features of this simple application are powered by Couchbase N1QL queries, which we enabled by creating the "BookStoreRepository," which, in turn, extends the "CouchbasePagingAndSortingRepository."

This post is part of the Couchbase Community Writing Program.
https://blog.couchbase.com/docker-vaadin-meet-couchbase-part1/
Module create new model - no table create

I'm trying to learn the OpenERP code logic... I would like to write a new module, and need a new model created. This is the res_certificates.py code:

    from openerp.osv import fields, osv, orm

    class res_certificates(osv.Model):
        _name = 'res_certificates'
        _description = 'Stored Certificates'
        _order = 'name'
        _columns = {
            'name': fields.char('Name', size=128, help="Internal name for certificate", required=True, select=True),
            'cert': fields.text('Certificate', help='Certificate (text)'),
            'cert_password': fields.char('Certificate Password', size=64),
            'cert_key': fields.text('Private Key', help="Private key for user"),
            'key_password': fields.char('Private Key Password', size=64),
        }

    res_certificates()

It is called from __init__.py as import res_certificates, but it does not create any table. (Note: at the moment I just need the table, no views.) Working on v7.0. If anyone could tell me what I am missing here??

Answer: Replace _name='res_certificates' with _name='res.certificates'. Then restart and update the module again. Here is the link to the Documentation of objects, fields and methods. However, there is a dedicated section for naming conventions.

Comment: Would appreciate some useful link on naming conventions here, because I saw in some modules naming _name='res_certificates' works fine???

Comment: The ORM converts the . to _ for you automatically when it names the new table.

Comment: Have you updated the module list and installed the module?

Comment: Yes, the module is visible in the list, and install returns: openerp.modules.loading: module my_module: loading objects

Comment: Can you try class res_certificates(osv.osv)?

Comment: Yes, tried it also... but...
it should be the same, because openerp.osv.osv lines 211-214 state:

    # deprecated - for backward compatibility.
    osv = Model
    osv_memory = TransientModel
    osv_abstract = AbstractModel # ;-)
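The accepted fix is the dotted model name, and as the comment above notes, the ORM derives the SQL table name by replacing dots with underscores. That mapping is easy to illustrate in plain Python — this little helper only mimics OpenERP's behavior and is not part of the openerp API itself:

```python
def table_name_from_model(model_name):
    """Mimic OpenERP's model-name -> table-name mapping (dots become underscores)."""
    return model_name.replace('.', '_')

# The corrected model name from the accepted answer maps back to the
# table name the question's author expected:
print(table_name_from_model('res.certificates'))  # -> res_certificates
```

So with _name = 'res.certificates' the backing table is still created as res_certificates, which explains why some modules appear to use underscore names directly.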
https://www.odoo.com/forum/help-1/question/module-create-new-model-no-table-create-2421
compile and run java program

Hello, everyone!!! I just want to ask: for example, I have a java file named Hello.java saved on my desktop. How do I compile and run it using the java built-in compiler of UBUNTU? I hope you could help me.
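A minimal answer sketch, assuming a JDK is installed on the Ubuntu machine (e.g. via the openjdk packages):

```
cd ~/Desktop       # the directory containing Hello.java
javac Hello.java   # compiles it, producing Hello.class
java Hello         # runs the class (note: no .class extension)
```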
http://www.roseindia.net/tutorialhelp/comment/395
This document identifies the status of Last Call issues on XQuery 1.0 and XPath 2.0 Formal Semantics as of February 11, 2005. The formal semantics document is referred to as "[FS]".

Last call issues list for xquery-semantics (up to message 2004Mar/0246). There are 90 issue(s): 90 raised (90 substantive), 0 proposed, 0 decided, 0 announced and 0 acknowledged.

XQuery 1.0 and XPath 2.0 Formal Semantics, W3C Working Draft 20 February 2004

4.1.5 Function Calls, Dynamic Evaluation rule 2 (imported function)

The last premise is incorrect. It would re-evaluate the whole function call in the imported module's environment. Instead, it should be more like:

dynEnv1 |- expanded-QName( Value1', ..., Valuen' ) => Value'

-Michael Dyck

XQuery 1.0 and XPath 2.0 Formal Semantics, W3C Working Draft 20 February 2004

The Formal grammar and the Core grammar both define the following non-terminals:

AttributeName
ElementName
ItemType
TypeName

Because the inference rules use symbols from both the Formal and Core languages, this would seem to cause an ambiguity. Now, it's true that the Formal and Core definitions of AttributeName are the same (both simply derive QName). Similarly for ElementName. And it's also true that ItemType doesn't occur in the inference rules (though are you certain it won't in a future draft?). But TypeName has different Formal and Core definitions (Formal allows AnonymousTypeNames, Core doesn't), and it does occur in the inference rules, so I think there is an ambiguity there.

But regardless of whether there is or isn't a formal ambiguity, there's presumably the opportunity for confusion, which could be removed simply by renaming some symbols.

(Note that, in addition to the above symbols, the Formal and *XQuery* languages also both define AttributeValue. However, because the appearance of XQuery constructs in inference rules is fairly limited, I suppose an ambiguity does not arise. But again, it would reduce the chance of confusion if you renamed one of them.)
-Michael Dyck

Dear FS Editors,

Below is a list of some possible issues with the FS Feb 2004 Working Draft, with occasional suggestions, ordered by FS sections. Hopefully, it does not come too late to be useful... My apologies if there are too many already known issues and false alarms down there.

Sincerely,
Vladimir Gapeyev

------------------------------------------------------------------------

[General] Shouldn't normative inference rules have formal identifiers by which they can be referenced, similar to those of grammar productions?

[2.5.1 Namespaces, last para] The term "host language" appears here and nowhere else in the spec! Should it just say "language" or "XQuery or XPath"?

[3.1, just before 3.2] The statement in this section that $fs:dot, $fs:position and $fs:last are _built-in_ variables in FS is confusing when one considers their use in the normalization rules of Section 4: one could assume that if the variables are built-in, they somehow magically contain appropriate values whenever referenced. However, the normalization rules treat them as regular variables, by binding them in for and let expressions whenever their value is changed -- no magic!

Suggestion: Perhaps the def of these variables should be moved to Section 4, where it would just say that they are special variables used in normalization rules that are assumed to be distinct from any user variables. They are supposed to mimic the functionality of '.', fn:position, fn:last, and this is achieved by careful formulation of the normalization rules, no magic. In light of this I wonder if, in addition to the rule

[ . ] == $fs:dot

one also needs the rules

[ fn:last() ] == $fs:last
[ fn:position() ] == $fs:position

?

[3.1, just before 3.2] It says: "Variables with the "fs" namespace prefix are reserved for use in the definition of the Formal Semantics. It is a static error to define a variable in the "fs" namespace."
This appears to be at odds with the last para of 2.5.1, which says that entities with the fs prefix are abstract, introduced just for the purposes of this spec, and are not supposed to be provided in the host language. So, under the 2.5.1 proviso, defining the fs prefix in a user query should lead to absolutely no trouble!

[3.1.1 Static Context, last 2 para before 3.1.1.1] These paragraphs define the functions fs:active-ns and fs:get_ns_from_items, which would perhaps be better done in 6.1. The descriptions of the functions aren't very clear...

[fs:active-ns] Perhaps it should say "fs:active-ns(statEnv) returns all prefix mappings from statEnv.namespace that are of 'active' kind."

[fs:get_ns_from_items] The expanded-QName and URI appear in the definition from nowhere, and their role is unclear! It also should say that the returned values are prefix-to-namespace mappings with kind indication, not just namespaces.

[4.2 Path Expressions, just before 4.2.1] It appears that, according to the spec, the last normalization rule, the one for [ StepExpr1 "/" StepExpr2 ]_Expr, is supposed to handle all "seminormalized" path expressions of the form StepExpr1 / StepExpr2 / ... / StepExpr_n. However, RelativePathExpr is defined as

    [71 (XQuery)] RelativePathExpr ::= StepExpr (("/" | "//") StepExpr)*

i.e. it can contain one or more StepExpr, while the rule in question handles the case of exactly two. Perhaps this can be fixed just by changing StepExpr2 to RelativePathExpr in the rule.

[4.2 Path Expressions] The "Core Grammar" section says: "The grammar for path expressions in the Core starts with the StepExpr production". According to productions [52,58,59 (Core)] in 4.2.1, it should rather say "starts with the AxisStep production".

[4.2.1 Steps -- Dynamic Evaluation, also Static Type Analysis] The eval rule for the judgment

    dynEnv |- Axis NodeTest => Value

refers to the variable $fs:dot.
Somehow it does not feel right to make the Core semantics depend on the name of an auxiliary variable, especially if one takes the position (see also comments for [3.1]) that $fs:dot is a usual Core variable, albeit introduced for special purposes during normalization.

Suggestion: Extend the Core syntax to contain an explicit step application construct. In more detail:

(1) Introduce an expression form Expr / AxisStep (or PrimaryExpr / AxisStep, or even $Var / AxisStep -- if more restrictive syntax is desired). The semantics is that Expr evaluates to a single node (to be checked by the type system), which is an explicit context node for AxisStep. [Of course, if the use of "/" is objectionable, the syntax can be different.]

(2) Modify the single-step normalization rules to read:

    [ ForwardStep ]_Expr == $fs:dot / [ ForwardStep ]_Axis

(3) Now, the above evaluation rule would look like

    dynEnv |- Expr => Value1
    <-- the rest of the clauses unchanged -->
    ----------------------------------------------------------------
    dynEnv |- Expr / Axis NodeTest => fs:distinct-doc-order(Value3)

[4.2.1.1, Axes] The text and normalization rules indicate that the preceding/following(-sibling) axes are not in the Core, but the grammar rules [60,61 (Core)] still contain them. (Also in Appendix A.1)

[4.2.1.1, Axes] For ForwardAxis, the grammar [91 (XQuery)] does not define the namespace:: axis, while [60 (Core)] does! Consequently, the normalization rule that follows normalizes from non-existing syntax. [This could be a problem with XQuery, rather than the FS spec, since the XQuery Data Model does define the kind of namespace nodes. On the other hand, if namespace:: is to be removed from FS, it currently also appears in [7.2.1, Principal Node Kinds].] There is also a cross-spec numbering discrepancy: FS [91 (XQuery)] is XQuery [89].

[4.2.1.1] The normalization rules for the sibling axes are given as follows:

    [following-sibling:: NodeTest]_Axis == [let $e := .
in parent::node()/child:: NodeTest [.<<$e]]_Expr

    [preceding-sibling:: NodeTest]_Axis == [let $e := . in parent::node()/child:: NodeTest [.>>$e]]_Expr

I think their bodies should be swapped! E.g., in the 1st, if I get it right, in the [.<<$e] predicate, $e refers to the original node and . ranges over all siblings, so the predicate is true for siblings that _precede_ $e. Also, it could make sense to be a notch more explicit by writing

    [following-sibling:: NodeTest]_Axis ==
        [let $e := . in $e/parent::node()/child:: NodeTest [.>>$e]]_Expr
                                                             ^^

[4.3.2 Filter Expressions] The subsection "Core Grammar" should be renamed "Normalization".

[4.7.3.1 Computed Element Constructors, both Dynamic Evaluation rules] In the extended static environments statEnv_1, ..., statEnv_n, indexes are missing on NCName; it should be NCName_1, ..., NCName_n.

[5. Modules and Prologs, Intro] It says: "Namespace declarations and schema imports always precede function definitions, as specified by the following grammar productions." However, production [33 (XQuery)] Prolog allows them to be intermixed freely.

[5.8 Schema import] In the rule:

    statEnv |- Definition* =>type statEnv1
    statEnv1 |- Definition1 =>type statEnv2
    ---------------------------------------
    Definition1 Definition* =>type statEnv2

the input statEnv is missing in the conclusion.

[5.8 Schema Import] (Also see comments for [F] below.) I am afraid the schema importing formalization in [5.8] and [F] is not robust w.r.t. namespace prefix bindings possibly defined in the imported schema. Namely, the import is formalized by the rule

    [schema String (at String)?]_Schema
    statEnv |- Definition* =>type statEnv1
    -------------------------------------------------------------
    statEnv |- import schema String (at String)?
=>stat statEnv1 and a representative rule for the second judgment above the line is statEnv |- TypeName of elem/type expands to expanded-QName statEnv1 = statEnv + typeDefn(expanded-QName => define type TypeName TypeDerivation ) ---------------------------------------------------------------------------- statEnv |- define type TypeName TypeDerivation =>type statEnv1 where statEnv maps a resolved expanded-QName of TypeName to a definition containing unresolved TypeName and where TypeDerivation can contain other unresolved Qnames (see current defs in [F]). But, even though TypeName is supposed to reside in the target namespace of the imported schema, statEnv may lack a prefix mapping necessary for resolving it into expanded-QName, since the first rule above does not add it by default! (And even if the version of the import statement is used that binds schema's namespace to a prefix, this prefix can only coincidentally be the same as the one in TypeName.) Moreover, if the schema defined other prefixes (e.g. for namespaces of the imported schemas), they can occur in TypeDerivation, and there is no provision in the current formalization for them to get into statEnv. I can see two possible approaches for cleaning this up: (1) Specify that schema import, in addition to Definitions (with unresolved QNames), also returns a set of prefix-to-namespace bindings that can now be incorporated into statEnv. (2) Specify that the definitions returned by schema import actually contain only resolved QNames. The obvious (killer?) shortcoming of (1) is that it implicitly introduces prefixes that explicitly appear only in the schema and can even shadow earlier prefixes defined by the query programmer. Approach (2) appears to be more sound, although it would require significant changes to the specification, at least: - Definitions productions in [2.3.4] need to be duplicated to similar defs that refer only to resolved QNames. 
(Although, since those productions describe entities not available in the source language, maybe the modified version is the only one that is needed?)

- statEnv needs to be modified to contain definitions with _resolved_ names.

[6.1 Formal Semantics Functions] Here is a summary list of fs-prefixed functions that appear throughout the spec but do not have subsections in 6.1, which is perhaps an unintended omission: fs:active-ns, fs:get_ns_from_items, fs:count, fs:is-same-node, fs:node-before, fs:node-after, fs:local-variables, fs:local-functions.

[7.1.9 Type expansion] (1) The inference rule given here is for the case when the type TypeName is defined by extension. There must be another one for a derivation by restriction.

(2) The inference rule contains the judgment

    statEnv |- Type2 is Type1 extended with union interpretation of TypeName

where Type1 is defined in the previous judgment to be the extension fragment that TypeName's extension adds to the type BaseTypeName. I believe, however, Type1 should be the concatenation of BaseTypeName's definition and TypeName's extension fragment.

Suggestion: It might help to obtain Type1 as the result of the following "derives" judgment:

    statEnv |- TypeName derives Type

which produces the type model for TypeName that composes the effects of all type derivations on the path from the root of the type hierarchy down to TypeName. Rules (they still need to be tinkered with to handle Mixed? correctly):

    statEnv |- TypeName of elem/type expands to expanded-QName
    statEnv.typeDefn(expanded-QName) => define type TypeName extends BaseTypeName Mixed? { Type0? }
    statEnv |- BaseTypeName derives BaseType
    Type = BaseType, Type0
    ----------------------------------------------------------------------
    statEnv |- TypeName derives Type

    statEnv |- TypeName of elem/type expands to expanded-QName
    statEnv.typeDefn(expanded-QName) => define type TypeName restricts BaseTypeName Mixed? { Type0? }
    statEnv |- BaseTypeName derives BaseType
    <<?
affirm that Type0 is a subtype of BaseType ?>>
    Type = Type0
    ----------------------------------------------------------------------
    statEnv |- TypeName derives Type

Note that using these two rules would automatically resolve issue (1) above.

[7.1.10 Union interpretation of derived types] The inference rule should contain the judgment to obtain expanded-QName from TypeName0.

[7.2.2.2 Dynamic semantics of axes] In most rules, the judgments are written like

    dynEnv |- axis Axis child:: of NodeValue => Value1
                   ^^^^

--- "Axis" should be dropped.

[A. Normalized core grammar] There seem to be quite a few unreachable non-terminals in the grammar:

- QuantifiedExpr [43 (Core)] -- perhaps it should be mentioned in [34 (Core)] for ExprSingle
- OrderByClause [39 (Core)] -- perhaps it should appear in the [35 (Core)] production for FLWORExpr
- PrimaryExpr [53 (Core)] -- perhaps it should appear in [51 (Core)] for ValueExpr
- ComputedConstructor [57 (Core)] -- perhaps it should appear in PrimaryExpr [53 (Core)]

[F.2 Schemas as a whole] The rule

    [Pragma]_pragma(targetNCName) == Definition*

in [F.2.1] for, presumably, handling Schema's "include" | "import" | "redefine" features, does not make sense: its RHS comes from nowhere! On the other hand, [F.2.2-4] say that handling of "include" | "import" | "redefine" is not specified in this document, since it is assumed to be handled by the XML Schema processor.

Suggestion: Perhaps [F.2] should just say that the helper function open-schema-document(SchemaName) encapsulates the functionality of a Schema processor, which is assumed to handle the "include" | "import" | "redefine" features. I.e., the result of open-schema-document(SchemaName) is described by the Content production [56 (Formal)]. Then there is no need for the [Pragma]_pragma rule, and the Schema mapping rules at the end of [F.2.1]
should be

    [schema SchemaName (at SchemaNamespace)?]_Schema ==
        [open-schema-document(SchemaName, SchemaNamespace)]_definition(targetNCName)

[F.2 Schemas as a whole] This section mentions targetURI (which comes from the imported schema) and targetNCName (which parameterizes all the mapping rules), which are supposedly related, but the relationship is nowhere spelled out. Also, in the presence of Schema <import> and relatives, there can be multiple target URIs...

Perhaps a good way to tackle both difficulties would be to say that open-schema-document() is also assumed to resolve all QNames defined and referenced in the imported schema. The mapping rules in the rest of the section would then refer to the fully resolved names and would not need to be parameterized by targetNCName.

[F.7, F.8 Attribute and model group definitions] These sections say that the corresponding features are not handled by the mapping, and refer to Issue 501 (FS-Issue-0158). But the Issues document marks the issue as resolved!

This is a resend of the submission I made a couple of days ago: I have added a unique identifier for each comment, and fixed a few typos of my own. Also, I forgot to mention that most if not all of these comments are editorial.

---------------------

XQuery 1.0 and XPath 2.0 Formal Semantics
W3C Working Draft 20 February 2004

Lines beginning with '%' uniquely identify each comment.
Lines beginning with '#' are quotes from the spec.
Lines beginning with '>' are suggested replacement text.

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TRANS-SECTION COMMENTS

% 001 'not' in judgments:

Note that there are no inference rules that tell us how to conclude that a judgment involving 'not' holds, so presumably you must explain how to do so.
I suspect this will be easier if you change occurrences of

# env |- not( something )

to

> not( env |- something )

________
% 002

When a double-quoted symbol appears in the EBNF, the symbol should appear without the quotes when it occurs in an inference rule or mapping rule. For the most part, the spec adheres to this, but it occasionally lapses. In particular, when the following quoted symbols appear in rules, the quotes should probably be removed:

    "element" "attribute" "lax" "strict" "skip" "/"

________
% 003

# Object in { a, b, ... }

The 'in' and braces are meta-syntactic, so they should be bolded. Or perhaps better would be to rewrite it as

> Object = a or Object = b or ...

(with bold 'or's).

________
% 004

# statEnv |- statEnv.mem(a) ...

The "statEnv |-" is redundant; delete it.

> statEnv.mem(a) ...

________
% 005

# Type <: Type

All '<:' judgments should start with 'statEnv |-'.

________
% 006

# Value matches Type

All 'matches' judgments should start with 'statEnv |-'.

________
% 007

# VarRef of var expands to Variable

Change to:

> VarRef = $ VarName
> VarName of var expands to Variable

________
% 008

# Variable

Not defined. Change to 'expanded-QName'?

________
% 009

# Value

Sometimes a Value (or more specifically, a pattern whose name is a symbol derivable from Value) will appear where an Expr is allowed. This seems kind of sloppy.

________
% 010

# String

As a specific case of the preceding, most occurrences of 'String' in the rules should probably be 'StringLiteral'.

________
% 011

# . . .

Change to:

> ...

________
% 012

# fn:local-name-from-QName

Change to:

> fn:get-local-name-from-QName

________
% 013

# fn:namespace-uri-from-QName

Change to:

> fn:get-namespace-uri-from-QName

% 014

Also, all uses of the function are of the form:

# fn:namespace-uri-from-QName( ... ) = statEnv.namespace(...)

but this is malformed: the LHS is a URI, but the RHS is a (namespace-kind, URI) pair.
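To make the type mismatch in % 014 concrete, here is a small Python sketch. All names here are illustrative inventions for this note, not definitions from the spec: it models statEnv.namespace as a dictionary from prefixes to (namespace-kind, URI) pairs, per section 3.1.1, and shows that the spec's equation only type-checks once the URI is projected out of the pair.

```python
# Toy model of statEnv.namespace: prefix -> (namespace-kind, URI).
# Dictionary contents and function names are illustrative only.
namespace_env = {
    "xs": ("active", "http://www.w3.org/2001/XMLSchema"),
    "fn": ("passive", "http://www.w3.org/2005/xpath-functions"),
}

def namespace_uri_from_qname(prefix, env):
    """Model of fn:get-namespace-uri-from-QName: returns a URI, not a pair."""
    kind, uri = env[prefix]  # project the URI out of the (kind, URI) pair
    return uri

# The spec's equation compares a URI with a (kind, URI) pair -- malformed:
assert namespace_uri_from_qname("xs", namespace_env) != namespace_env["xs"]
# The well-formed comparison projects the second component of the pair:
assert namespace_uri_from_qname("xs", namespace_env) == namespace_env["xs"][1]
```

The same projection is what the corrected equation in % 014 would have to spell out.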
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX SECTION-SPECIFIC COMMENTS Abbreviations: "Sem" = "Semantics" "SCP" = "Static Context Processing" "STA" = "Static Type Analysis" "DCP" = "Dynamic Context Processing" "DEv" = "Dynamic Evaluation" "DErr" = "Dynamic Errors" "LHS" = "left-hand side" (or "above the '=='" in mapping rules) "RHS" = "right-hand side" (or "below the '=='" in mapping rules) To identify a particular premise of an inference rule, I give its position (e.g. "premise 3"). In counting, I skip any premise that is merely an ellipsis. ------------------------------------------------------------------------ 2.1.2 Notations for judgments % 015 # Symbols are purely syntactic and are used to write the judgment # itself. There are actually (at least) two kinds of symbols, which you might call "base-syntactic" and "meta-syntactic". Base-syntactic symbols arise from the EBNF grammar. Meta-syntactic symbols arise from the "judgment declarations" in Notation sections. Because a judgment can combine both kinds of symbols, it's important to be able to distinguish them. Usually, base-syntactic symbols are presented in normal typeface, whereas meta-syntactic symbols are presented in boldface. However, sometimes they aren't. It would be good if the spec were consistent. In particular, add bold to: => |- : (Unless it's the colon in a QName or Wildcard.) = (Except a few cases, e.g. "namespace | "(" Type ( "," Type )+ ")" > | "(" Type ( "|" Type )+ ")" > | "empty" > | "none" ------------------------------------------------------------------------ 2.3.4 Top level definitions % 028 Grammar # [39 (Formal)] ComplexTypeDerivation ::= ... Mixed? # [35 (Formal)] TypeSpecifier ::= Nillable? ... # [44 (Formal)] Definition ::= ... Substitution? Nillable? The symbol '?' is both base-syntactic (to denote a zero-or-one type) and meta-syntactic (to denote an optional [base-syntactic] phrase). It's sometimes difficult to tell which kind each is. 
They should at least be easily distinguishable. But the meta-syntactic '?' complicates the matching of premises to conclusions (in other rules), so I think things would be even better if meta-syntactic '?' were eliminated. For example, consider the symbol 'Mixed'. In the grammar, and in the inference rules, it always occurs followed by a '?'. So replace occurrences (in the grammar and inference rules) of 'Mixed?' with a new symbol, say 'MixedOption', and replace the production # Mixed ::= "mixed" with > MixedOption ::= "mixed"? or > MixedOption ::= | "mixed" I believe these symbols can be handled this way: Derivation Mixed Nillable PositionalVar Substitution TypeDeclaration When the symbol sometimes occurs with the '?' and sometimes without, you proceed as above, except that the production for the new symbol augments rather than replaces the original production. E.g., replace occurrences of 'ElementName?' with 'ElementNameOption', keep the 'ElementName' production, and add the production > ElementNameOption ::= ElementName? Some symbols that can be handled this way: AttributeName ElementName TypeSpecifier ValidationMode ------------------------------------------------------------------------ 2.4.1 Processing model % 029 # Static analysis is further divided into four sub-phases. Each phase # consumes the result of the previous phase and generates output for # the next phase. ... # Static analysis consists of the following sub-phases # 1. Parsing # 2. Static Context Processing # 3. Normalization # 4. Static type analysis In fact, as section 5 tells us, some Normalization happens as part of SCP, and some as part of STA. And SCP happens as part of STA. So "sub-phases" 2, 3, and 4 are not as assembly-line as you indicate. ------------------------------------------------------------------------ ------------------------------------------------------------------------ 3.1.1 Static Context % 030 statEnv.docType: # corresopnds Change to "corresponds". 
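Returning to % 029 above: the claim that the four "sub-phases" are not an assembly line can be sketched concretely. The following Python toy (all function names are mine, not the spec's) models section 5's description, where static context processing, and with it some normalization, runs nested inside static type analysis rather than strictly before it.

```python
# Illustrative sketch of why the 2.4.1 "sub-phases" are not strictly
# sequential: SCP (and some normalization) happens inside STA.
def parse(query):
    return ("ast", query)

def normalize(ast):
    return ("core", ast)

def static_context_processing(ast, log):
    log.append("scp")
    # Per section 5, some normalization is already performed here.
    return normalize(ast), log

def static_type_analysis(ast, log):
    # SCP (and with it, more normalization) runs as part of STA.
    core, log = static_context_processing(ast, log)
    log.append("sta")
    return core, log

core, log = static_type_analysis(parse("1 + 2"), [])
assert log == ["scp", "sta"]  # SCP is nested inside STA, not a prior phase
```

The point is only the nesting of the calls; the real phases do far more than these stubs.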
________
% 031 statEnv.namespace

# The namespace environment maps a namespace prefix (NCName) onto a
# namespace kind and a namespace URI (URI) or the empty namespace
# (#EMPTY-NAMESPACE).

How does a prefix get mapped to #EMPTY-NAMESPACE?

------------------------------------------------------------------------
3.1.1.1 Resolving QNames to Expanded QNames

% 032 para 1

# Element and type QNames may be in the empty namespace,

I looked in
-- the 'Namespaces in XML' spec,
-- the 'XQuery Data Model' spec, and
-- the 'XQuery Language' spec,
and as far as I can see, none of them support the term "the empty namespace". Moreover, QNames aren't in namespaces, NCNames are.

# that is, there is no URI associated with their namespace prefix.

I don't think you mean this. If an element or type QName has a namespace prefix, then it will match the first rule in the Semantics section (with the prefix bound to NCName1). If there's no URI associated with the prefix, then statEnv.namespace(NCName1) will fail, and you'll get a static error. If you really wanted the stated behaviour, you'd need this rule:

> statEnv.namespace(NCName1) undefined
> -------------------------------------
> statEnv |- NCName1:NCName2 of elem/type expands to (#EMPTY-NAMESPACE,NCName2)

But I don't think you want that. Instead, the rules that involve #EMPTY-NAMESPACE appear to be using it to handle names that belong to no namespace. If that's what you mean, then change your terminology. And change '#EMPTY-NAMESPACE' to '#NO-NAMESPACE-URI' or something.

________
% 033 Notation

# statEnv |- QName of elem/type expands to expanded-QName

Some occurrences of this judgment-form have 'TypeName' in the 'QName' position. But TypeName derives both QName and AnonymousTypeName. In cases where the TypeName is an AnonymousTypeName, it will not be able to match the conclusion of any rule. Which means that the '=>type' judgment does not hold for definitions of anon types, which means that schema import doesn't work.
Possible fix: Split this judgment-form into two, one for 'elem' and one for 'type', and then in the latter, change 'QName' to 'TypeName'. Then add a rule for 'AnonymousTypeName of type expands to'. ________ % 034 Sem / rule 1,3 / premise 1 # statEnv.namespace(NCName1) = URI-or-EmptyNamespace Section 3.1.1 tells us that statEnv.namespace maps an NCName to a pair consisting of a namespace kind (passive/active) and a namespace URI (or #EMPTY-NAMESPACE). Thus the judgment should be: > statEnv.namespace(NCName1) = (NamespaceKind, URI-or-EmptyNamespace) ________ % 035 Sem / rule 5,7 / premise 1 # statEnv.namespace(NCName1) = URI Ditto above. > statEnv.namespace(NCName1) = (NamespaceKind, URI) ------------------------------------------------------------------------ 3.1.2 % 036 dynEnv.funcDefn # The initial function environment (statEnvDefault.funcDefn) ... Change 'statEnvDefault' to 'dynEnvDefault'. ________ % 037 dynEnv.docValue: # corresopnds Change to "corresponds". ------------------------------------------------------------------------ 3.4.4 SequenceType Matching % 038 Normalization / rule 19, 25 / LHS Each of these rules appears to have a judgment thrown in before the '==' sign. This should presumably be explained, or else notated differently. ------------------------------------------------------------------------ 3.5.2 Handling Dynamic Errors % 039 rule 1 I realize this rule is only supposed to specify the default behaviour, but how do you prevent it from being true in the non-default cases? ________ % 040 rules 2, 3 You're using what appears to be formal notation to convey an informal rule, which is unwise. For any given statEnv, you can always find some binding for 'symbol' and 'component' such that the lookup fails, so the premise always holds, so every expression raises statError and dynError. 
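The complaint in % 040 can be demonstrated mechanically. In the Python sketch below (all names are illustrative, not the spec's), the error rule's premise is read as "there exist a symbol and component whose lookup in statEnv fails"; for any finite environment such a pair can always be found, so the premise holds and the rule would fire for every expression.

```python
# Sketch of the % 040 problem: an existentially-read failing-lookup
# premise is satisfiable against ANY finite static environment, so a
# rule with that premise would derive statError for every expression.
stat_env = {("varType", "x"): "xs:integer"}  # an arbitrary finite statEnv

def premise_holds(env):
    """True if some (symbol, component) lookup in env fails."""
    # A finite environment always misses some key; pick a fresh one:
    fresh = ("varType", "surely-undefined-" + str(len(env)))
    return fresh not in env

assert premise_holds(stat_env)  # holds for this environment...
assert premise_holds({})        # ...and indeed for every environment,
# so, read formally, the rule would apply unconditionally.
```

This is why the rule needs an informal reading or a different formulation, as % 040 suggests.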
------------------------------------------------------------------------ ------------------------------------------------------------------------ 4.1.1 Literals % 041 all rules Occurrences of 'IntegerLiteral', 'DecimalLiteral', 'DoubleLiteral', 'StringLiteral' should be italicized ________ % 042 3rd DEv / rule 1 / conclusion # dynEnv |- DoubleLiteral => xs:double(DoubleLiteral) '=>' should be bold ________ % 043 4th DEv / rule 1 / conclusion # dynEnv |- StringLiteral => xs:string(StringLiteral) '=>' should be bold. ------------------------------------------------------------------------ 4.1.2 Variable References % 044 DEv / rule 1,2 / premise 1 # dynEnv |- VarName of var expands to expanded-QName Change 'dynEnv' to 'statEnv'. ________ % 045 DEv / rule 2 / premise 4 # dynEnv1 |- $ VarName => Value The '1' should be a subscript. ------------------------------------------------------------------------ 4.1.5 Function Calls % 046 Notation / rule 1 / RHS Change 'Expr' to '[ Expr ]_Expr'. (Or you could do it in the Normalization rule, but it's easier here.) ________ % 047 Normalization / rule 2 # QName ( A1, ..., An) ) Delete extra right-paren ________ % 048 STA / rule 1 / premise 2 # statEnv |- Expr1 : Type1 ... Exprn : Typen This is structured as a single premise but should presumably be two plus an ellipsis-premise. ________ % 049 STA / rule 3 / premise 1,2 # not(Typex = (...) Occurrences of 'not' should be in bold. ________ % 050 STA / rule 3 / premise 1,2 # not(Typex = (...) Append a right paren. ________ % 051 STA / rule 3 / premise 7,8 # Type1' can be promoted to Type1'' Prepend 'statEnv |-' ________ % 052 STA+DEv+DErr Occurrences of 'FuncDecl' should be italicized. Also, there should be a Formal EBNF production for FuncDecl. ________ % 053 DEv / rule 1 Several of the premises refer to statEnv, but the conclusion doesn't. (This happens with lots of the DEv rules in the spec.) 
Theoretically, this would allow the inference engine to fabricate any statEnv that satisfied the premises. But presumably, we want the same statEnv that the FunctionCall "received" during STA. This needs to be explained, and possibly denoted somehow.

________
% 054 DEv / rule 1,2,3 / premise 4
      DErr / rule 3,4 / premise 4

# dynEnv |- Expr1 => Value1 ... dynEnv |- Exprn => Valuen

This is structured as a single premise but should presumably be two plus an ellipsis-premise.

________
% 055 DEv / rule 1 / premise 8

# dynEnvDefault = ( ... ) ] |-

Change to

> dynEnvDefault + varValue( ...) |-

In addition to the obvious changes, note the deletion of the right-bracket.

________
% 056 DErr / rule 2 / premise 3

# FuncDeclj = define function expanded-QName(Type1, ..., Typen) as Type
# for all 1 <= j <= m

This appears to require that all signatures for a given func name be identical. Put j subscripts on the 'Type' patterns.

________
% 057 DErr / rule 3 / premise 9

# dynEnv [ varValue = (...) ] |-

Change to

> dynEnv + varValue(...) |-

------------------------------------------------------------------------
4.2.1 Steps

% 058 STA / rule 1 / premise 2

# Type1 <: node

DEv / rule 1 / premise 2

# Value1 matches node

'node' does not appear to be a valid Type. If you meant 'node()', that's still not a valid (Formal) Type, though it is a valid (XQuery) ItemType.

________
% 059 STA / rule 1 / premise 3, 4, conclusion
      DEv / rule 1 / premise 3, 4, conclusion

Occurrences of 'Axis' should be italicized.

________
% 060 STA / rule 1 / premise 4, 5
      DEv / rule 1 / premise 4, 5

Occurrences of 'PrincipalNodeKind' should be italicized.

________
% 061 DErr / rule 1 / conclusion

# dynEnv.varValue |- ...

Delete '.varValue'.
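As an aside on the 4.2.1 step rules above: the explicit step-application form suggested earlier (an `Expr / AxisStep` construct instead of the hidden $fs:dot) is easy to prototype. The Python toy below is entirely my own illustration, not the spec's algorithm: the context node is an explicit operand of the step, and fs:distinct-doc-order is modeled by deduplicating and sorting on a document-order index.

```python
# Toy evaluator for "Expr / child::name" with an explicit context node,
# in the spirit of the suggested Core step-application construct.
# All class and function names here are illustrative.
class Node:
    def __init__(self, name, doc_order, children=()):
        self.name = name
        self.doc_order = doc_order      # models document order
        self.children = list(children)

def eval_step(context_node, axis, name_test):
    """Evaluate Axis NodeTest against an explicit context node."""
    if axis != "child":
        raise NotImplementedError(axis)
    hits = [c for c in context_node.children if c.name == name_test]
    # Model of fs:distinct-doc-order: dedupe, then sort by doc order.
    return sorted(set(hits), key=lambda n: n.doc_order)

b1, c, b2 = Node("b", 2), Node("c", 3), Node("b", 4)
root = Node("a", 1, [b1, c, b2])
assert [n.doc_order for n in eval_step(root, "child", "b")] == [2, 4]
```

With the context node explicit, the dynamic rule needs no reference to an auxiliary variable name.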
------------------------------------------------------------------------ 4.2.1.2 Node Tests % 062 Grammar # [95 (XQuery)] Wildcard ::= "*" | (NCName ":" "*") | ("*" ":" NCName) # [64 (Core)] Wildcard ::= Change occurrences of 'NCName' to 'Prefix' and 'LocalPart' respectively, or 'Wildcard' won't match patterns that use those names. ------------------------------------------------------------------------ 4.3.1 Constructing Sequences % 063 Normalization, STA, DEv Change occurrences of 'Expr' to 'ExprSingle'. ------------------------------------------------------------------------ 4.6 Logical Expressions % 064 Normalization, STA, DEv, DErr Change occurrences of 'Expr' to 'AndExpr' or 'ComparisonExpr' as appropriate. ------------------------------------------------------------------------ 4.7.1 Direct Element Constructors % 065 Grammar # [26 (XQuery)] ElementContentChar ::= Char - [{}<&] - [{}<&] # [27 (XQuery)] QuotAttContentChar ::= Char - ["{}<&] - ["{}<&] # [28 (XQuery)] AposAttContentChar ::= Char - ['{}<&] - ['{}<&] In each case, eliminate the repetition. ------------------------------------------------------------------------ 4.7.3.1 Computed Element Constructors % 066 Often, this section does not recognize that the 'content expression' of a CompElemConstructor is not an Expr but a CompElemBody. So anything of the form: # element QName { Expr } should be changed to > element QName { CompElemBody } and the changes propagated (i.e., the rules made to handle CompElemNamespaces). ________ % 067 STA / rules 2,3,4 / premise 1 # statEnv |- QName in context ... This judgment matches both # statEnv |- ElementName? in context ... declared in 7.6.2, and # statEnv |- AttributeName? in context ... declared in 7.6.3. Is this intentional? Maybe the judgment-forms should be distinct (add an 'elem' or 'attr' keyword in bold). ________ % 068 STA / rule 2 / premise 5 # ValidationContext1 = statEnv.validationContext "/" QName The slash should not be in quotes. 
Even so, I can't parse the premise, because ValidationContext/QName doesn't fit the EBNF for ValidationContext.

________
% 069 DEv / rule 1 / premise 1, conclusion

# Expr = CompElemNamespace1, ..., CompElemNamespacen, (Expr0)

DEv / rule 2 / premise 3

# Expr2 = CompElemNamespace1, ..., CompElemNamespacen, (Expr3)

The equation is invalid; an Expr cannot match the RHS. The LHS should be a CompElemBody.

________
% 070 DEv / rule 1 / premise 2,3
      DEv / rule 2 / premise 4,5

# CompElemNamespace = namespace NCName { URI }

The EBNF for a CompElemNamespace says that the NCName can be omitted, but these judgments don't allow for that.

________
% 071 DEv / rule 1 / premise 6

# statEnvn, dynEnv |- ...

It's not clear what the notation means.

________
% 072 DEv / rule 1 / premise 8
      DEv / rule 2 / premise 10

# NamespaceAnnotations = (CompElemNamespace1, ... CompElemNamespacen,

Delete the parens, or else change the EBNF for NamespaceAnnotations to require/allow parens. Also, put a comma after the ellipsis.

________
% 073 DEv / rule 1,2 / last premise

# element QName of type xs:anyType { Value0 } { NSAnnotations }
# element { Value0 } of type xs:anyType { Value1 } { NSAnnotations }

These are meant to be ElementValues, but: (a) the context allows an Expr, not an ElementValue, and (b) (for rule 2) the element-name must be a QName, not computed.

________
% 074 DEv / rule 1,2 / conclusion

# statEnv dynEnv |-

Insert comma, presumably.

________
% 075 DEv / rule 2 / premise 8

# fs:item-sequence-to-node-sequence (Expr3); => Value

Delete the semicolon.

------------------------------------------------------------------------
4.7.3.2 Computed Attribute Constructors

% 076 STA / rules 2, 3 / premise 1

# statEnv |- QName in context ...

As in 4.7.3.1, this matches both judgment-forms

# statEnv |- ElementName in context ...
# statEnv |- AttributeName in context ...
________ % 077 DEv / rule 1 / conclusion # attribute expanded-QName of type xdt:untypedAtomic { Value } EBNF for AttributeValue says QName, not expanded-QName, and SimpleValue, not Value. ________ % 078 DEv / rule 2 Change 'Expr' to 'Expr1' or 'Expr2' as appropriate. ________ % 079 DEv / rule 2 / conclusion # attribute { Value0 } of type xdt:untypedAtomic { Value } Where you have "{ Value0 }", AttributeValue only allows QName. ________ % 080 DErr / rule 3 / premise 1 # statEnv.statEnv Delete 'statEnv.' ------------------------------------------------------------------------ 4.7.3.3 Document Node Constructors % 081 DEv, DErr / all rules # dynEnv |- Value matches Type Change 'dynEnv' to 'statEnv', according to 7.3.1. ------------------------------------------------------------------------ 4.7.3.4 TextNodesConstructors % 082 DEv, DErr / all rules # dynEnv |- Value matches Type Change 'dynEnv' to 'statEnv', according to 7.3.1. ------------------------------------------------------------------------ 4.7.3.5 Computed Processing Instruction Constructors % 083 DEv, DErr / all rules # dynEnv |- Value matches Type Change 'dynEnv' to 'statEnv', according to 7.3.1. ------------------------------------------------------------------------ 4.7.3.6 Computed Comment Constructors % 084 DEv, DErr / all rules # dynEnv |- Value matches Type Change 'dynEnv' to 'statEnv', according to 7.3.1. ------------------------------------------------------------------------ 4.8 [For/FLWR] expressions % 085 In rules throughout 4.8.x, change 'Expr' to 'ExprSingle' as appropriate, to conform to the EBNF. ------------------------------------------------------------------------ 4.8.2 For expression % 086 STA / all rules # ... varType(VarRef : Type) ... According to Section 3.1.1, the domain of statEnv.varType is expanded-QName, but a VarRef is not an expanded-QName. You'll need to add some stuff: > VarRef = "$" VarName > VarName of var expands to expanded-QName > ... 
varType( expanded-QName : Type ) ________ % 087 STA / rule 2 / premise 2 STA / rule 4 / premise 4 # statEnv + varType(VarRef1 : T, VarRefpos : xs:integer) Change the comma to a semicolon. ________ % 088 STA / rule 3,4 / premise 3 # prime(Type1) <: Type0 Prepend 'statEnv |-'. ________ % 089 STA / rule 3 / premise 4 # statEnv + varType(VarRef1 : Type0)) |- ... Delete extra right paren. ________ % 090 DEv / rule 3 / premise 4,5 DEv / rule 5 / premise 6,8 # varValue(Variable => Itemn, Variablepos => n) Change the comma to a semicolon ________ % 091 DEv / rule 4 / conclusion # => gr_Value1; ,..., Valuen Change 'gr_Value1;' to italicized Value sub 1. ________ % 092 DErr / rule 1 / conclusion # for Variable1 Change to > for VarRef ________ % 093 DErr / rule 3 / premise 4 # Variable => ItemiVariablepos => i Insert semicolon and space: > Variable => Itemi; Variablepos => i ------------------------------------------------------------------------ 4.9 Unordered Expressions % 094 Notation # dynEnv |- Value1 permutes to Value2 Should be centered. ------------------------------------------------------------------------ 4.10 Conditional Expressions % 095 Throughout, change occurrences of 'Expr2' & 'Expr3' to 'ExprSingle2' & 'ExprSingle3', to conform to the EBNF. ________ % 096 DEv+DErr / all rules / conclusions # dynEnv |- if Expr1 then Expr2 else Expr3 ... Add parens around Expr1. ________ % 097 DErr / rule 3 / premise 2 # dynEnv |- Expr3 => Error Change to: > dynEnv |- Expr3 raises Error ------------------------------------------------------------------------ 4.11 Quantified Expressions % 098 Why are Quantified Expressions in the Core? Couldn't they be normalized into For expressions? % 099 In rules throughout this section, change 'Expr' to 'ExprSingle' as appropriate, to conform to the EBNF. % 100 Also, in each rule, put a linebreak after the 'of var expands to' premise. ________ % 101 DEv + DErr / most rules / premise 1 # dynEnv |- Expr1 => Item1 ... 
Item
Add commas around ellipsis.
________
% 102
DEv + DErr / most rules
# 1 <= i <= n
Put this premise next to a premise that uses 'i'.
________
% 103
DEv / all rules
# dynEnv(Variable1 => Itemx))
Not only does this have an extra right paren, but it also treats dynEnv as a mapping. Change to
> dynEnv + varValue(Variable1 => Itemx)
________
% 104
DEv / rule 3 / premise 5
# dynEnv(VarRef1 => Itemn))
(Similar to above, but with a VarRef.) Change to
> dynEnv + varValue(Variable1 => Itemn)
________
% 105
DEv / rule 4 / premise 6
# statEnv |- VarRef1 of var expands to Variable1
This repeats premise 3. Delete it.
________
% 106
DErr / rule 1 / conclusion
# TypeDeclaration ?
Delete space before '?'.
------------------------------------------------------------------------
4.12.2 Typeswitch
% 107
2nd notation
# statEnv |- Type1 case CaseClause : Type
# dynEnv |- Value1 against CaseRules => Value
Should be centered.
________
% 108
STA / rule 1 / premise 4
# statEnv |- Type0 case default VarRefn+1 return Exprn : Typen+1
STA / rule 3 / conclusion
# Type0 case default VarRef return Expr : Type
A 'default' clause is not a CaseClause, which is all that the 'case' judgment is declared to handle.
________
% 109
STA / rule 2 / premise 2
STA / rule 3 / premise 1
# statEnv( VarRef : Type ) ...
Change to
> VarRef = $ VarName
> VarName of var expands to expanded-QName (or Variable)
> statEnv + varType( expanded-QName : Type ) ...
________
% 110
STA / rules 2+3 / conclusion
Prepend "statEnv |-".
________
% 111
DEv / rule 1 / conclusion
# dynEnv |- typeswitch (Expr) CaseRules => Value1
The symbol 'CaseRules' does not exist in the XQuery or Core grammar, only in the Formal. (Maybe the Core grammar should use the CaseRules syntax.)
________
% 112
DEv / rule 2 / conclusion
# case VarRef SequenceType
Insert 'as':
> case VarRef as SequenceType
________
% 113
DEv / rule 3 / conclusion
# case SequenceType VarRef
Change to:
> case VarRef as SequenceType
------------------------------------------------------------------------
4.12.3 Cast
% 114
Notation
# AtomicType1 cast allowed AtomicType2 = { Y, M, N }
Prepend 'statEnv |-'.
% 115
And instead of putting "{ Y, M, N }" in the judgment, introduce a Formal non-terminal (e.g. Castability):
> [xx (Formal)] Castability ::= Y | M | N
________
% 116
Notation / rule 1 / premise 3, conclusion
# ... = X, where X in { Y, M, N }
# ... = X
Change to
> ... = Castability
________
% 117
Notation
# Type2 ( Value1 ) casts to Value2
Prepend "dynEnv |-"
________
% 118
STA, DE, DErr / all rules
# ... cast allowed ...
Prepend 'statEnv |-'
________
% 119
DEv / rule 1 / premise 3
# ( Value1 ) cast as AtomicType2 => Value2
Prepend "dynEnv |-".
________
% 120
DEv / rule 1 / conclusion
# AtomicType ( Expr ) casts to Value2
DEv / rule 2 / premise 3
# AtomicType2 ( Value1 ) casts to Value2
AtomicType is not actually a Type (i.e., not derivable from symbol Type), so these judgments don't match the judgment-form declared in the Notation section. Change to QName?
________
% 121
DErr / rule 1 / premise 1
# AtomicValue1 of type AtomicTypeName
Change 'AtomicValue' to 'AtomicValueContent'.
------------------------------------------------------------------------
4.12.4 Castable
% 122
throughout
# Expr castable as AtomicType
Change 'Expr' to 'CastExpr'.
________
% 123
Normalization / rule 2 / LHS
# [Expr castable as AtomicType]_Expr
Presumably, AtomicType should be followed by a '?', otherwise it's the same LHS as rule 1.
________
% 124
DEv / rule 1 / premise 2
# ( Value1 ) cast as AtomicType=> Value2
Prepend 'dynEnv |-'.
________
% 125
DEv / rule 2 / premise 1
# ( Expr1 ) cast as AtomicType2 raises dynError
Prepend 'dynEnv |-'.
------------------------------------------------------------------------
4.13 Validate Expressions:
% 126
Normalization / rules 1,2 / LHS
Each is missing [ ]_Expr around LHS.
________
% 127
STA / rule 1 / premise 2
DEv / rule 1 / premise 2
DEv / rule 2 / premise 2
# statEnv(validationMode(ValidationMode) +
# validationContext(ValidationContext))
This syntax is not supported by 2.1.4. Change to:
> statEnv + validationMode(ValidationMode)
> + validationContext(ValidationContext)
________
% 128
STA / rule 1 / premise 5
# prime(Type) = ElementType1 ... ElementType2
Put choice bars around ellipsis.
________
% 129
STA / rule 1 / last premise
# Type1 = ElementName1 | ... | ElementNamen
'ElementName' is not a valid Type. Did you mean ElementType instead?
________
% 130
DEv / rules 1,2 / premise 5
# ElementValue2 = ElementName2 ...
Insert 'element'
> ElementValue2 = element ElementName2 ...
Also, in rule 1, delete the semicolon at the end of the line.
________
% 131
DEv / rules 1,2 / premise 7
# annotate as ...
Prepend "statEnv |-" or "statEnv1 |-" (not sure which).
------------------------------------------------------------------------
------------------------------------------------------------------------
5 Modules and Prologs
% 132
Notation
# [81 (Formal)] PrologDeclList ::= (PrologDecl Separator)*
The rules in SCP and DCP assume that PrologDeclList is left-recursive:
> PrologDeclList ::= () | PrologDeclList PrologDecl Separator
but the rules in 5.2's SCP and DCP assume that it's right-recursive:
> PrologDeclList ::= () | PrologDecl Separator PrologDeclList
Since section 5.2 needs to construct a new PrologDeclList by prepending a PrologDecl to an existing PrologDeclList, I think it wins. So maybe the left-recursive rules should be changed.
E.g.: > > SCP: > -------------------------------------------------- > statEnv |- () =>stat statEnv > > PrologDecl1 = [PrologDecl]_PrologDecl > statEnv |- PrologDecl1 =>stat statEnv1 > statEnv1 |- PrologDeclList =>stat statEnv2 ; PrologDeclList1 > -------------------------------------------------- > statEnv |- PrologDecl ; PrologDeclList =>stat statEnv2; > PrologDecl1 ; PrologDeclList1 > > STA: > statEnvDefault |- PrologDeclList =>stat statEnv ; PrologDeclList1 > statEnv |- [QueryBody]_Expr : Type > -------------------------------------------------- > PrologDeclList QueryBody : Type > > DCP: > -------------------------------------------------- > dynEnv |- () =>dyn dynEnv > > dynEnv |- PrologDecl =>dyn dynEnv1 > dynEnv1 |- PrologDeclList =>dyn dynEnv2 > -------------------------------------------------- > dynEnv |- PrologDecl ; PrologDeclList =>dyn dynEnv2 > > DEv: > dynEnvDefault |- PrologDeclList1 =>dyn dynEnv > dynEnv |- [QueryBody]_Expr => Value > -------------------------------------------------- > PrologDeclList QueryBody => Value ________ % 133 Notation # [82 (Formal)] PrologDecl You forgot FunctionDecl! ________ % 134 Notation / normalization # [PrologDecl]_PrologDecl == PrologDecl Use subscripts to distinguish the two PrologDecls, otherwise it looks like the []_PrologDecl function is the identity function. ________ Notation / judgment-form 1 # PrologDeclList =>stat statEnv; PrologDeclList1 % 135 (1) The use of a meta-syntactic semicolon is probably a poor choice, especially when base-syntactic semicolons are nearby. How about a bolded word like "with"? % 136 (2) It isn't clear what the resulting PrologDeclList1 is for. % 137 (3) There isn't a corresponding judgment-form declared for =>dyn: PrologDeclList =>dyn dynEnv ________ % 138 Notation / judgment-form 3 # dynEnv |- PrologDecl =>stat dynEnv Change '=>stat' to '=>dyn'. 
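The prepend argument in % 132 can be made concrete with a small executable sketch (Python; the function names and cons-list encoding are my own invention for illustration, not anything in the spec). With the right-recursive reading, each declaration extends the environment seen by the rest of the prolog, and the normalized head is prepended in O(1):

```python
# A PrologDeclList is modeled as a cons list: None for the empty list,
# (decl, rest) for "PrologDecl Separator PrologDeclList".

def step(env, decl):
    """Stand-in for the per-declaration judgment: extend the environment
    and return a 'normalized' form of the declaration."""
    return env + [decl], decl.upper()

def process(env, decls):
    """Right-recursive walk, mirroring the shape the 5.2 SCP rules need:
    handle the head, thread the updated environment into the tail, and
    prepend the normalized head onto the normalized tail."""
    if decls is None:
        return env, None
    decl, rest = decls
    env1, norm = step(env, decl)
    env2, rest_norm = process(env1, rest)
    return env2, (norm, rest_norm)   # O(1) prepend onto a cons list

env, normalized = process([], ("a", ("b", None)))
# env is ["a", "b"]; normalized is ("A", ("B", None))
```

With the left-recursive shape, the head of the list sits inside the innermost constructor, so prepending a declaration would mean rebuilding the whole list; that is the sense in which the right-recursive reading "wins".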
________
% 139
SCP / rule 2
# PrologDecl1 = [PrologDecl]_PrologDecl
When you have a normalization-invocation in an inference rule, you should perhaps make the judgment look more like the "longhand" judgment shown in 2.4.2 / Notation:
> statEnv |- [PrologDecl]_PrologDecl == PrologDecl1
________
% 140
SCP / rule 1 / conclusion
DCP / rule 1 / conclusion
# () =>stat statEnvDefault; ()
# () =>dyn dynEnvDefault
These use '()' to denote a (syntactically) empty PrologDeclList. This is perhaps not a good idea, since there is possible confusion with '()' denoting a (semantically) empty sequence in the base language. In other rules, empty syntax is denoted by the empty string. See, e.g., 7.6.2 / Semantics / rule 1 / conclusion, where an omitted ElementName in an 'ElementName? in context' judgment results in the judgment
# statEnv |- in context global ...
________
% 141
SCP / rule 2 / premise 3
# statEnv1 |- PrologDecl1 =>stat statEnv2 ; PrologDecl1
Delete '; PrologDecl1'. When applied to a single PrologDecl, '=>stat' just produces a statEnv.
________
% 142
STA / rule 1 / premise 2
# statEnv |- [QueryBody]_Expr : Type
Maybe split into
> statEnv |- [QueryBody]_Expr == Expr2
> statEnv |- Expr2 : Type
________
% 143
DEv / rule 1 / premise 1
# PrologDeclList1 =>dyn dynEnv
Presumably, the subscript 1 refers to the normalized prolog that =>stat "returned" during STA of the module. But as far as the notation is concerned, it just looks like PrologDeclList1 is "free".
________
% 144
DEv / rule 1 / premise 2
# dynEnv |- [QueryBody]_Expr => Value
Maybe split into
> statEnv |- [QueryBody]_Expr == Expr2
> dynEnv |- Expr2 => Value
------------------------------------------------------------------------
5.2 Module Declaration
% 145
Notation
# URI =>module_statEnv statEnv
# URI =>module_dynEnv dynEnv
It seems to me that these mappings should be components of the static and dynamic environments, respectively.
________
% 146
SCP+DCP / rule 1 / premise 1
# (declare namespace NCName = String PrologDeclList) =>stat statEnv
Delete the parens around the namespace decl, and insert a semicolon between the String and PrologDeclList:
> declare namespace NCName = String ; PrologDeclList =>stat statEnv
________
% 147
SCP+DCP / rule 1 / premise 2
# module namespace NCName = String PrologDeclList
Insert a semicolon again. But even so, it's not a judgment.
________
% 148
DCP / rule 1 / premise 1
# ... =>stat dynEnv
Change '=>stat' to '=>dyn'.
------------------------------------------------------------------------
5.4 Default Collation Declaration
% 149
SCP / rule 1 / premise
# statEnv1 = statEnv + collations( String)
Change 'collations' to 'defaultCollation'. However, Section 3.1.1 tells us that statEnv.defaultCollation is a pair of functions, not a String. So first, look up the collation's URI in the static environment:
> func-pair = statEnv.collations(String)
> statEnv1 = statEnv + defaultCollation(func-pair)
________
% 150
DCP / rule 1 / conclusion
# dynEnv |- default collation String =>dyn dynEnv
Insert "declare" before "default".
------------------------------------------------------------------------
5.7 Default Namespace Declaration
% 151
SCP / rules 1, 2, 3 / conclusion
# statEnv |- declare default element namespace = String =>stat statEnv1
Delete '=' after 'namespace'.
------------------------------------------------------------------------
5.8 Schema Import
% 152
Notation / judgment-form 1
# statEnv1 |- Definition* =>type statEnv2
The meta-syntactic '*' is used to denote a sequence of Definitions, but this overloads '*' unnecessarily. Instead, a new symbol 'Definitions' would be less confusion-prone.
> [xx (Formal)] Definitions ::= Definition*
(And propagate to F.1.3 Main mapping rules and F.2.1 Schema.)
________
% 153
SCP / rules 1,2,3 / premise 1
# ... schema String (at String)? ...
Without subscripts, you're forcing the two String's to bind to the same object, which you don't want. Moreover, the schema location hint can be more involved than just "(at String)?". To take care of both of these problems, define
> [xx (XQuery+Core)] ImportLocationHints ::=
>     (("at" StringLiteral) ("," StringLiteral)*)?
use that in [149 (XQuery)] and [108 (Core)], and in the inference rules, change every occurrence of
# (at String)?
to
> ImportLocationHints
________
% 154
SCP / rule 3:
# default_elem_namespace( String (at String)?)
Delete "(at String)?", as default_elem_namespace doesn't care about it.
________
% 155
SCP / rule 5 / conclusion
# Definition1 Definition* =>type statEnv2
Prepend "statEnv |-".
------------------------------------------------------------------------
5.9 Module Import
% 156
SCP / rule 1 / premise 1
# String =>module_statEnv statEnv2
Change 'String' to 'String1', I think.
________
% 157
SCP / rule 1 / premise 2
# statEnv3 = statEnv1(fs:local-variables(statEnv2, String1)
# + fs:local-functions(statEnv2, String1))
This is vague, ad hoc syntax. The following is still ad hoc, but at least is more specific (re varType + funcDefn) and fits better with the syntax established by 2.1.4:
> statEnv3 = statEnv1 + varType( fs:local-variables(a,b) )
> + funcDefn( fs:local-functions(a,b) )
________
% 158
DCP / rule 1 / premise 1
# String =>module_dynEnv dynEnv2
Delete it.
________
% 159
DCP / rule 1 / premise 2
# dynEnv2 = dynEnv1 + varValue(expanded-QName => #IMPORTED(URI))
Add subscript 1 to 'expanded-QName'.
________
% 160
DCP / rule 1 / premise 3, conclusion
# (expanded-QName2, Type2) ... (expanded-QNamen, Typen)
Put commas around ellipsis.
________
% 161
DCP / rule 2 / premise 1
# String =>module_dynEnv dynEnv2
Delete it.
________
% 162
DCP / rule 2 / premise 2
# dynEnv + funcDefn1(...)
The subscript 1 is in the wrong place. Change to
> dynEnv1 + funcDefn(...)
________
% 163
DCP / rule 2 / premise 2,3, conclusion
# expanded-QName1(Type1,1, ..., Type1,n)
# expanded-QName2(Type2,1, ..., Type2,n)
# expanded-QNamek(Typek,1, ..., Typek,n)
# expanded-QName1(Type1,1, ..., Type1,n)
# expanded-QNamek(Typek,1, ..., Typek,n)
Note that this appears to require that all functions imported from a module have the same number of arguments (n). Moreover, with all these subscripts and ellipses, the rule is hard to follow. To fix both of these problems, define a Formal symbol for function signatures:
> [xx (Formal)] FuncSignature ::=
>     expanded-QName "(" ( Type ("," Type)* )? ")"
Then you can say:
> dynEnv2 = dynEnv1 + funcDefn( FuncSignature1 => #IMPORTED(URI) )
> dynEnv2 ; URI |- FuncSignature2 ... FuncSignaturek
>     =>import_functions dynEnv3
> ------------------------------------------------
> dynEnv1 ; URI |- FuncSignature1 ... FuncSignaturek
>     =>import_functions dynEnv3
________
% 164
DCP / rule 2 / premise 3, conclusion
Put commas around the (top-level) ellipsis.
________
% 165
DCP / rule 3 / premise 1
# String =>module_dynEnv dynEnv2
Change 'String' to 'String1'.
________
% 166
DCP / rule 3 / premise 2,3
# dynEnv1 |-
# dynEnv3 |-
Change to
> dynEnv1 ; String1 |-
> dynEnv3 ; String1 |-
to match the conclusion of rule 2.
________
% 167
DCP / rule 3 / conclusion
# statEnv1 |- import module ... =>dyn statEnv4
Change each 'statEnv' to 'dynEnv'.
------------------------------------------------------------------------
5.10 Namespace
% 168
DCP / rule 1 / conclusion
# declare namespace NCName =>dyn dynEnv
Insert '= String' after 'NCName'.
------------------------------------------------------------------------
5.11 Variable Declaration
% 169
SCP / all rules / last premise
# varType( Variable => Type )
Change '=>' to ':' if you want to follow 2.1.4.
________
% 170
DCP / all rules / conclusion
# =>stat dynEnv
Change '=>stat' to '=>dyn'.
------------------------------------------------------------------------
5.12 Function Declaration
% 171
Normalization + SCP + STA + DCP
# define function QName
should be
> declare function QName
________
% 172
Normalization / rule 3 / LHS
Add 'PrologDecl' subscript to right bracket.
________
% 173
SCP / para 1
# withtin
Change to 'within'.
________
% 174
SCP / rule 1 / premise 2 + conclusion
STA / rule 1 / conclusion
STA / rule 2 / conclusion
DCP / rule 1 / premise 4 + conclusion
DCP / rule 2 / conclusion
# [SequenceType1]sequencetype, ... [SequenceTypen]sequencetype
Put comma after ellipsis.
________
% 175
SCP / rule 1 / premise 2
# funcType(expanded-QName => ( [SequenceType1]sequencetype, ...
funcType is supposed to map an expanded-QName to a set of FuncDecls, but this maps it to a sequence of Types.
________
% 176
SCP / rule 1 / premise 3
# statEnv2 = statEnv + funcType1(expanded-QName)
Seems to be a leftover. Delete it. In the conclusion, change 'statEnv2' to 'statEnv1'.
________
% 177
SCP / rule 1 / conclusion
Prepend 'statEnv |-'.
________
% 178
STA / rule 1 / premise 1
# ... varType( VarRef : T; ...
The domain of statEnv.varType is expanded-QName, not VarRef, so:
> VarRef = $ VarName
> VarName of var expands to expanded-QName
> ... varType( expanded-QName : T; ...
________
% 179
STA / rule 1+2 / conclusion
# statEnv |- define function QName ...
This doesn't appear to be a judgment.
________
% 180
DCP / rule 1 / premise 4 + conclusion
# dynEnv'
This is the only place where a prime is appended to an environment name. For greater consistency, use a subscript '1' instead.
________
% 181
DCP / rule 1 / premise 4
# funcDefn(expanded-QName => ... )
The domain of dynEnv.funcDefn is not just an expanded-QName, but a whole FuncSignature (expanded-QName and sequence of (parameter) Types).
________
% 182
DCP / rule 2 / conclusion
# Variable1 as SequenceType1
# Variablen as SequenceTypen
Change 'Variable' to 'VarRef'.
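The point made in % 163 and % 181, that funcDefn should be keyed by a whole signature rather than by the expanded-QName alone, can be sketched as follows (Python; the table and helper names are hypothetical illustrations, not spec syntax):

```python
# Keying the function table by (name, parameter-types) lets same-named
# functions with different arities coexist; keying by name alone would
# make the second declaration silently overwrite the first.

func_defn = {}

def declare(name, param_types, body):
    # The key is the full signature, so f(xs:integer) and
    # f(xs:integer, xs:integer) occupy distinct entries.
    func_defn[(name, tuple(param_types))] = body

declare("local:f", ["xs:integer"], lambda x: x + 1)
declare("local:f", ["xs:integer", "xs:integer"], lambda x, y: x + y)

assert func_defn[("local:f", ("xs:integer",))](2) == 3
assert func_defn[("local:f", ("xs:integer", "xs:integer"))](2, 3) == 5
```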
------------------------------------------------------------------------
------------------------------------------------------------------------
6.1.3 The fs:convert-operand function
% 183
STA / rule 3,4,5 / premise 2
# statEnv |- Expr1 <: Type1
Change '<:' to ':'.
------------------------------------------------------------------------
6.2.1 The fn:abs, fn:ceiling, fn:floor, fn:round, and fn:round-half-to-even functions
% 184
STA / rule 1 / premise 3
# convert_untypedAtomic(....) can be promoted to Type1
Maybe change to
> Type2 = convert_untypedAtomic(....)
> statEnv |- Type2 can be promoted to Type1
________
% 185
STA / rule 1 / premise 4
# xs:integer xs:decimal
Insert comma after 'xs:integer'.
------------------------------------------------------------------------
6.2.2 The fn:collection and fn:doc functions
% 186
STA / rule 2,5 / premise 1
# statEnv |- statEnv.map(String) not defined
Change 'not defined' to 'undefined' for consistency. Or change occurrences of 'undefined' to 'not defined'.
________
% 187
STA / rule 2,3 / conclusion
# statEnv |- fn:collection(Expr) : node *
'node *' does not appear to be a valid Type.
________
% 188
STA / rule 3,6
# statEnv |- not(Expr = String)
This is an attempt to express "Expr is not a literal string". However, note that Expr = String doesn't mean that 'Expr' is a literal string, but rather that 'Expr' and 'String' are instantiated to the same object. Because 'String' has no other reference in the rule, the inference engine is free to instantiate it to any object. In particular, it can *always* instantiate it to something different from 'Expr', causing the premise to hold, and thus the conclusion to hold (even when it shouldn't). I don't think you've defined the notation that would express this correctly.
________
% 189
STA / rule 4 / premises 1,2
# statEnv |- statEnv.docType(String) = Type
# statEnv |- statEnv.docType(String) = Type
The two premises are the same. Delete one of them.
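The instantiation problem described in % 188 can be demonstrated with a tiny sketch (Python; the declarative reading of '=' as "there exists a consistent instantiation" is my own paraphrase):

```python
# not(Expr = String), with String otherwise unconstrained, holds whenever
# SOME candidate value for String differs from Expr -- which is always.

def premise_holds(expr):
    """Declarative reading of not(Expr = String) with String free:
    succeed iff some instantiation of String is unequal to expr."""
    candidates = ["a.xml", "b.xml"]   # String may be instantiated freely
    return any(s != expr for s in candidates)

# Even when Expr IS the literal "a.xml", String can be bound to "b.xml",
# so the premise holds and the rule fires when it shouldn't.
assert premise_holds("a.xml")
```

Expressing "Expr is not a StringLiteral" needs a negative side-condition on the syntactic category of Expr, not a negated unification.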
------------------------------------------------------------------------
6.2.3 The fn:data function
% 190
Notation / judgment-form 1
# statEnv |- data on Type1 : Type2
Put the colon in bold.
________
% 191
STA / rule 8 / premise 1
STA / rule 9 / premise 1,2
STA / rule 11 / premise 1
# statEnv |- ... type lookup (of type TypeName)
The parens are ungrammatical. Note that
7.1.9 / Sem / rule 2 / conclusion
E.1.1 / Sem / rule 2 / conclusion
don't have them.
________
% 192
STA / rule 11 / premise 3
# define type TypeName Derivation? Mixed { Type1? }
Change 'Mixed' to just 'mixed'.
------------------------------------------------------------------------
6.2.5 The fn:error function
% 193
STA / rule 1 / premise 1
# statEnv |- Expr : item ?
"item?" does not appear to be a valid Type.
% 194
Anyway, if fn:error() always has the 'none' type, why does the rule need a premise?
------------------------------------------------------------------------
6.2.6 The fn:min, fn:max, fn:avg, and fn:sum functions
% 195
Sem / rule 1,2,3 / premise 3
# Type1 can be promoted to T
Prepend 'statEnv |-'.
------------------------------------------------------------------------
6.2.7 The fn:remove function
% 196
STA / rule 1 / conclusion
# fn:remove(Expr,Expr1) : : prime(Type) ? quantifier(Type)
Change ': :' to just ':'.
------------------------------------------------------------------------
6.2.8 The fn:reverse function
% 197
STA / rule 1 / conclusion
# fn:reverse(Expr) : : prime(Type) ? quantifier(Type)
Change ': :' to just ':'.
------------------------------------------------------------------------
6.2.10 The op:union, op:intersect, and op:except operators
% 198
STA / rule 2 / conclusion
# prime(Type1, Type2); ? quantifier(Type1,Type2); ? ?
Delete two semicolons.
------------------------------------------------------------------------ ------------------------------------------------------------------------ 7.1.3 Element and attribute name lookup (Dynamic) % 199 2nd Sem / rule 1 / premise 1 3rd Sem / rule 1,2 / premise 1 # ... statEnv.attrDecl(AttributeName) ... The domain of statEnv.attrDecl is expanded-QName, so change to: > statEnv |- AttributeName of attr expands to expanded-QName > ... statEnv.attrDecl(expanded-QName) .. ------------------------------------------------------------------------ 7.1.4 Element and attribute type lookup (Static) % 200 1st Sem / rule 2 / conclusion # statEnv |- element ElementName type lookup Nillable? xdt:untyped Insert 'of type' before xdt ________ % 201 2nd Sem / rule 1,2 / premise 1 # ... statEnv.attrDecl(AttributeName) ... The domain of statEnv.attrDecl is expanded-QName, so change to: > statEnv |- AttributeName of attr expands to expanded-QName > ... statEnv.attrDecl(expanded-QName) .. ________ % 202 2nd Sem / rule 2 / conclusion # statEnv |- attribute AttributeName type lookup xdt:untypedAtomic Insert 'of type' before xdt ------------------------------------------------------------------------ 7.1.5 Extension % 203 Sem / rule 1 / premise 1,2 # Type1 = AttributeAll1 , ElementContent1 What does it mean? 'AttributeAll' doesn't match any symbol name, and ElementContent isn't in the Formal language. Is 'ElementContent' supposed to be 'ElementContentType'? And similarly in 7.1.6 / Sem / rule 1 / premise 1. ------------------------------------------------------------------------ 7.1.6 Mixed Content % 204 Sem / rule 1 / conclusion # ( ElementContent & text* | xdt:anyAtomicType *) This relies on the relative precedence of type-operators '&' and '|', which has not been defined, and probably shouldn't be. Just use parens. 
------------------------------------------------------------------------ 7.1.7 Type adjustment % 205 Sem / rule 1 / premise 3 # statEnv |- Type3 & processing-instruction* & comment* is Type4 What kind of judgment is this? ------------------------------------------------------------------------ 7.1.9 Type Expansion % 206 Notation / judgment-form 1 # statEnv |- Nillable? TypeReference expands to Type Given the use of this judgment in 7.2.3.1.2, it would be better to change 'Nillable? TypeReference' to 'TypeSpecifier'. ------------------------------------------------------------------------ 7.2.2.1 Static semantics of axes % 207 Sem / rule 8 / premise 2 Sem / rule 17 / premise 2 # Nillable? TypeReference expands to ... Prepend 'statEnv |-'. ________ % 208 Sem / rule 8 / premise 3, 4 # Type1 has-node-content Type1' Prepend 'statEnv |-'. ________ % 209 Sem / rule 16 / conclusion # processing-instructions* Delete final 's'. ________ % 210 Sem / rule 17 / premise 3, 4 # Type1 has-attribute-content Type1' Prepend 'statEnv |-'. ________ % 211 Sem / rule 23 / conclusion # document { Type } Italicize 'Type'. ------------------------------------------------------------------------ 7.2.2.2 Dynamic semantics of axes % 212 Sem / rules 3-10 # axis Axis self:: of NodeValue # axis Axis child:: of element ... # axis Axis attribute:: of ElementName ... # etc In each case, delete "Axis" before the actual axis name. ________ % 213 Sem / rule 5 / conclusion # dynEnv |- axis Axis attribute:: of ElementName ... Insert 'element' before 'ElementName'. ________ % 214 Sem / rule 11 # In all the other cases, the axis application results in an empty # sequence. # dynEnv |- axis Axis of NodeValue => () otherwise. The premises are unformalized. ------------------------------------------------------------------------ 7.2.3.1.1 (Static semantics of) Name Tests % 215 rule 13 / conclusion rule 26 / conclusion # prefix:local Change to italicized 'Prefix:LocalPart'. 
________ % 216 rule 22 / premise 1 # fn:namespace-uri-from-QName(QName1) Change 'QName1' to 'expanded-QName1', and add > QName1 of attr expands to expanded-QName1 ------------------------------------------------------------------------ 7.2.3.1.2 (Static semantics of) Kind Tests % 217 throughout / many rules / conclusion # dynEnv |- Change 'dynEnv' to 'statEnv' ________ % 218 It would be nice if the division-headers of this section ('Document kind test', 'Element kind test', etc.) stood out more than the big bold 'Semantics' headers. ------------------------------------------ (Static semantics of) Document kind test % 219 Sem / rule 3 / premise 2 # (Type1 <: DocumentType or DocumentType <: Type1) Put 'or' in bold, since it's meta. ________ % 220 Sem / rule 3,4 / conclusion # document-node (Type) This is not a valid Type. Change to > document { Type } ------------------------------------------ (Static semantics of) Element kind test % 221 The "Semantics" header should be either one or two paras earlier. ________ % 222 Sem / rule 3 / conclusion Sem / rule 4 / premise 1 # element * TypeSpecifier This is not a valid Type. (Wildcards are allowed in XQuery Tests, not in Formal Types.) Delete the '*'. ________ % 223 Sem / rule 5 / premise 1, 2, 3, conclusion # ElementNameOrWildcard TypeSpecifier This is being used as if it's a Type, but it isn't. Change all to > element ElementName? TypeSpecifier ------------------------------------------ (Static semantics of) Attribute kind test % 224 The "Semantics" header should be either one or two paras earlier. ________ % 225 Sem / rule 3 / conclusion Sem / rule 4 / premise 1 # attribute * TypeSpecifier Change to: > attribute TypeReference and propagate. ________ % 226 Sem / rule 5 / premise 1, 2, 3, conclusion # AttribNameOrWildcard TypeSpecifier Change to: > attribute AttributeName? TypeReference and propagate. ------------------------------------------ (Static semantics of) Processing instruction, comment, and text kind tests. 
% 227 What, no "Semantics" header? ________ % 228 rule 4 / conclusion # test text() of with PrincipalNodeKind text Move the 'of': > test text() with PrincipalNodeKind of text ________ % 229 rule 6 # If none of the above rules apply, then the node test returns the empty # sequence and the following rule applies: # statEnv |- test node() with PrincipalNodeKind of NodeType : empty The premises are unformalized. ------------------------------------------------------------------------ 7.2.3.2.1 (Dynamic semantics of) Name Tests % 230 Sem / all rules / premise 1 # fn:node-kind( NodeValue ) = PrincipalNodeKind Italicize 'PrincipalNodeKind'. ________ % 231 Sem / rule 3 / premise 3 # fn:namespace-uri-from-QName( QName ) Change 'QName' to 'expanded-QName', and add > QName of attr expands to expanded-QName ________ % 232 Sem / rule 3 / conclusion # test prefix:* with PrincipalNodeKind of NodeValue Change 'prefix' to italicized 'Prefix'. ________ % 233 Sem / rule 4 / premise 3, conclusion # local Change 'local' to italicized 'LocalPart'. ------------------------------------------------------------------------ 7.2.3.2.2 (Dynamic semantics of) Kind Tests Processing instruction, comment, and text kind tests. % 234 Sem / rule 2 / premise 3, 4 # String Italicize 'String'. ________ % 235 Sem / rule 9 # If none of the above rules applies then the node test returns the # empty sequence, and the following dynamic rule is applied: # dynEnv |- test node() with PrincipalNodeKind of NodeValue => () The premises are unformalized. ------------------------------------------------------------------------ 7.2.4 Attribute filtering % 236 Sem / rules 1,2 / premise 1 # dynEnv |- Value1 of attribute:: => Value2 What kind of judgment is this? ________ % 237 Sem / rules 1,2 / premise 2 # dynEnv |- Value2 of "attribute", QName => ... Again, what kind of judgment is this? 
------------------------------------------------------------------------
7.3.1 Matches
% 238
Sem / rule 1 / conclusion
# AtomicValue of type AtomicTypeName
Change 'AtomicValue' to 'AtomicValueContent'.
________
% 239
Sem / rule 2,3,4 / conclusion
# String
Italicize 'String'.
________
% 240
Sem / rule 8 / conclusion
# attribute AttributeName of type TypeName { Value }
Change '{ Value }' to '{ SimpleValue }'.
________
% 241
Sem / rule 15 / conclusion
# statEnv |- empty matches Type*
'empty' is not a Value. Do you mean '()'?
------------------------------------------------------------------------
7.3.2 Subtype and Type equality
% 242
Note / para 1
# values which are not available at static type checking type.
Change second 'type' to 'time'?
------------------------------------------------------------------------
7.5.1 Type promotion
% 243
1st Notation / judgment-form 1
# Type1 can be promoted to Type2
Prepend 'statEnv |-'.
________
% 244
1st Sem / rule 8 / premise 2
# prime(Type2) can be promoted to prime(Type2')
Prepend 'statEnv |-'.
________
% 245
2nd Notation / judgment-form 2
# Value1 against Type2 promotes to Value2
Prepend 'statEnv |-'
________
% 246
2nd Semantics / rule 1 / premise 1, conclusion
Prepend 'statEnv |-'
________
% 247
2nd Semantics / rule 2 / premise 3
# statEnv |- Type1 != Type2
Is statEnv needed for this judgment?
________
% 248
2nd Semantics / rule 2 / premise 4
# cast as Type2 (Value1) => Value2
Change to
> statEnv |- (Value1) cast as Type2 => Value2
------------------------------------------------------------------------
7.6.2 Elements in validation context
% 249
Sem / rule 1 / premise 3,4
Sem / rule 3 / premise 2
Sem / rule 6 / premise 3
# statEnv |- statEnv.elemDecl(expanded-QName) => define ElementType
statEnv.elemDecl maps an expanded-QName to a Definition, but 'define ElementType' is not a valid Definition. Change it to 'define element ElementName Substitution? Nillable? TypeReference' and then construct ElementType out of those parts?
________

% 250

Sem / rule 2 / premise 1
Sem / rule 6 / premise 2
# element of TypeName

Insert 'type' before 'TypeName'.

________

% 251

Sem / rule 2 / premise 2
# statEnv |- test element() with "element" of Type1 : Type2

Put 'with' in bold.

________

% 252

Sem / rule 2 / conclusion
Sem / rule 6 / conclusion
Sem / rule 7 / conclusion
Sem / rule 8 / premise 2 + conclusion
Sem / rule 9 / premise 2 + conclusion
# in context type(TypeName)
# in context ElementName2
# in context (SchemaGlobalContext "/" ... "/" SchemaContextStepN)
# in context (SchemaGlobalContext "/" ... "/" SchemaContextStepN) "/" ElementName2
# in context SchemaContextStep
# in context SchemaContextStep "/" ElementName2

These assume a syntax for ValidationContext that does not match the
EBNF. Also, the slashes should not be in quotes.
Similar problems in 7.6.3.

________

% 253

Sem / rule 3, 6, 7, 8, 9
# ValidationMode = "strict" or "lax"

It would be better to say
> ( ValidationMode = "strict" ) or ( ValidationMode = "lax" )
or
> ValidationMode in { "strict", "lax" }
At the very least, 'or' should be in bold, since it's meta.
Similar problems in 7.6.3.

________

% 254

Sem / rule 6 / premise 3
Sem / rule 7 / premise 5
Sem / rule 8 / premise 4
Sem / rule 9 / premise 4
# test ElementName with "element" of Type

An ElementName is not a valid NodeTest. Also, put 'with' in bold.

------------------------------------------------------------------------
7.6.3 Attributes in validation context

% 255

Sem / rule 1 / premise 1,2
Sem / rule 3,4 / premise 1
# ... statEnv.attrDecl(AttributeName) ...

The domain of statEnv.attrDecl is expanded-QName, so change to:
> statEnv |- AttributeName of attr expands to expanded-QName
> ... statEnv.attrDecl(expanded-QName) ..

________

% 256

Sem / rule 1 / premises 1,3
Sem / rule 3 / premise 1
# statEnv |- statEnv.attrDecl(AttributeName) => define AttributeType

statEnv.attrDecl maps an expanded-QName to a Definition, but 'define
AttributeType' is not a valid Definition.
Change it to 'define attribute AttributeName TypeReference' and then
construct AttributeType out of those parts?

________

% 257

Sem / rule 2 / premise 1
Sem / rule 6 / premise 2
# statEnv |- axis attribute:: of element of TypeName : Type1

Insert 'type' before 'TypeName'.

________

% 258

Sem / rule 5 / conclusion
# resolves to element AttributeName

Change 'element' to 'attribute'?

________

% 259

Sem / rule 6 / premise 3
Sem / rule 7 / premise 5
Sem / rule 8 / premise 5
Sem / rule 9 / premise 5
# test AttributeName

An AttributeName is not a valid NodeTest. Maybe you want just QName.

________

% 260

Sem / rule 7
# statEnv |- statEnv.elemDecl(expanded-QName2) => define ElementType2

'define ElementType' is not a valid Definition. Change it to 'define
element ElementName Substitution? Nillable? TypeReference' and then
construct ElementType out of those parts?

------------------------------------------------------------------------
------------------------------------------------------------------------
A.1 Core BNF

% 261

Named Terminals
# [18 (Core)] ElementContentChar ::= Char - [{}<&] - [{}<&]
# [19 (Core)] QuotAttContentChar ::= Char - ["{}<&] - ["{}<&]
# [20 (Core)] AposAttContentChar ::= Char - ['{}<&] - ['{}<&]

Eliminate repetition (as in 4.7.1)

------------------------------------------------------------------------
------------------------------------------------------------------------
E.1.1 Type resolution

% 262

Notation / judgment-form 1
# statEnv |- (TypeReference | TypeDerivation) resolves to ...

The '|' is meta. It would be better to declare the judgment-form twice,
once for TypeReference and once for TypeDerivation.

------------------------------------------------------------------------
E.1.3.1 Simply erases

% 263

Sem / rule 2 / premise 1,2
# statEnv |- SimpleValue1 simply erases to String1 SimpleValue1 != ()

Each is structured as a single premise, but presumably should be two.
________

% 264

Sem / rule 3 / conclusion
# AtomicValue of type AtomicTypeName

Change 'AtomicValue' to 'AtomicValueContent'

------------------------------------------------------------------------
E.1.4.1 Simply annotate

% 265

Notation
# statEnv |- simply annotate as SimpleType ( SimpleValue ) => ...

SimpleValue is in the EBNF but not SimpleType.

________

% 266

Sem / rule 2 / premise 1
# statEnv |- (...) fails

Change to:
> not( statEnv |- ... )

------------------------------------------------------------------------
E.1.4.3 Annotate

% 267

Sem / rule 1 / conclusion
# annotate as () (()) => ()

Change the first '()' (the Type) to 'empty'.

________

% 268

Sem / rule 10,11,12 / last premise
# nil-annotate as Type Nillable?

Change to:
> nil-annotate as Nillable? Type

________

% 269

Sem / rule 11 / premise 1
# Value filter @xsi:type => TypeName

The 'filter' judgment "yields" a SimpleValue, but a TypeName is not a
SimpleValue.

________

% 270

Sem / rule 11 / premise 2
# statEnv |- XsiTypeReference = of type TypeName

Is statEnv needed for this judgment?

________

% 271

Sem / rule 15 / premise 1
# String as xs:anySimpleType

Syntactically, what is the 'as xs:anySimpleType'?

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

-Michael Dyck


XQuery 1.0 and XPath 2.0 Formal Semantics
W3C Working Draft 20 February 2004

I think the normalization rules for some and every are incorrect: the
call of the function fn:boolean is missing in the satisfies clause.

I would write the normalization rule like this:

[some VarRef1 in Expr1, ..., VarRefn in Exprn satisfies Expr]Expr
==
some VarRef1 in [Expr1]Expr satisfies
  some VarRef2 in [Expr2]Expr satisfies
    ...
      some VarRefn in [Exprn]Expr satisfies
        fn:boolean([Expr]Expr)

Same for every.
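
To illustrate the point (this example is mine, not part of the original
comment): the operand of a satisfies clause can be any expression, so
the Core normalization has to take its effective boolean value
explicitly via fn:boolean.

    (: The satisfies operand need not be of type xs:boolean; the
       quantifier is defined over the effective boolean value.
       fn:boolean("") is false, fn:boolean("false") is true (it is a
       non-empty string), so this quantified expression is true: :)
    some $x in ("", "false") satisfies fn:boolean($x)

Without the fn:boolean wrapper, the normalized Core expression would
pass non-boolean values such as these strings directly to the
quantifier.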
Minor typo: the last two 'satisfies' keywords are misspelled as
'satistfies' (note the 't' before the 'f').

Best regards,
Lionel


Section 3.1.1
Technical
The static context should allow $fs:dot and friends to be predefined by
an implementation with an implementation-provided static type in the
outermost static variable declaration context.

Section 3.4.4
Technical
Why does normalization of [document-node(ElementTest)]sequencetype
include text in the result? This seems wrong since text should be part
of the original type, shouldn't it? Also, you should not have the node
types written with ().

Section 3.4.4
Technical
" [processing-instruction()]sequencetype == processing-instruction ? ":
Why the '?'?

Section 3.4.1
Editorial/Technical
Why is xdt:untypedAtomic not part of the definition of xs:anySimpleType?
Please explain. Also, why is xs:anySimpleType not defined as
xdt:anyAtomicType*?

Section 3.1.2
Technical
Please replace "It is a static error to define a variable in the "fs"
namespace." with "Since there is no namespace URI associated with the
fs prefix, users cannot refer to such variables". Note that a user
should be allowed to define a prefix called fs with any namespace he/she
chooses. The point of not associating a namespace with the definitional
prefixes is that there will be no need to check for a clash or disallow
them.

Section 3.4.1
Editorial/Technical
Could "( element? & text? & comment? & processing-instruction? )* " be
written as the simpler
"( element | text | comment | processing-instruction )* "?

MS-FS-LC1-037
Section 3.1.1
Editorial
Replace "expaned " with "expanded "

MS-FS-LC1-038
Section 3.1.1
Editorial
"in-scope type definitions": Do we need to change this name as we did
for "static in-scope namespaces"? [Note this comment has been addressed
at the F2F and is submitted for completeness reasons.]

MS-FS-LC1-040
Section 3.1.1
Editorial
typeDefn, attrDecl, elemDecl: Mention that they contain the imported
schema components.
MS-FS-LC1-041
Section 3.1.1
Editorial
Default collation should be a member of the collation environment.

MS-FS-LC1-042
Section 3.1.1
Editorial
Replace "corresopnds " with "corresponds "

MS-FS-LC1-043
Section 3.1.1
Editorial
Please rename " fs:get_ns_from_items" to " fs:get_static_ns_from_items"
to avoid confusion with dynamic in-scope namespaces

MS-FS-LC1-046
Section 3.1.2
Editorial
$fs:dot can be referred to by the user using the XPath expression ".".
Please add.

MS-FS-LC1-049
Section 3.4.1
Editorial
There are still references to xdt:untypedAny. Should be replaced with
xdt:untyped.

MS-FS-LC1-050
Section 3.4.1
Editorial
"derived derived " -> "derived "

MS-FS-LC1-051
Section 3.4.1
Editorial
The definition of fs:numerics seems broken. Also, do we really need to
make it a type or can we just do the explosion into all the possible
types?

MS-FS-LC1-052
Section 3.4.3
Editorial
"SequenceTypes can be used in [XPath/XQuery] to refer to a type imported
from a schema ": Obviously it can refer to other types such as comment
or element(a, b) that are not imported from a schema!

MS-FS-LC1-054
Section 3.4.4
Editorial
Also show NCName rule (not only string) for pi or do it in mapping to
the Core syntax!

Section 3.1.2
Technical
Why should the function body be part of the dynamic context? This seems
to imply that I can change the function definition at runtime or based
on an input value. I understand that we do not need the body of a
function during compilation of a query, but I would assume that you do
not have dynamic semantics with respect to function resolution. Maybe we
need a special linking environment to contain this information?

Section 3.1.1
Technical
Why can't the default function namespace be the empty namespace? Call it
out either way.

Section 3.1.1
Editorial/Technical
Are the namespace and collation environments already base URI adjusted?
Please call out either way. Note that the term URI means an absolute
URI, which seems to indicate that base URIs have been applied.
Section 2.3.4
Editorial/Technical
Why is <xsd:attribute mapped to an optional attribute? The instance
always will have this attribute with the fixed value.

Section 2.3.4
Editorial/Technical
" define type Section mixed { (element h1 of type xsd:string | element p
of type xsd:string | element div of type Section)* } ": The Type grammar
production does not allow "(", ")". Needs to be fixed in the Type
grammar production.

Section 2.3.4
Editorial/Technical
"A type declaration with simple content derived by restriction
define type SKU restricts xsd:string { xsd:string } ":
How does this work? If SKU is complex, how can it restrict from an
atomic type? If it is a restriction from an atomic type, then the
{xs:string} should not be there.

Section 2.3.2
Editorial/Technical
"A name alone corresponds to a reference to a global element or
attribute declaration. ": This is confusing. Why should this be used as
a schema lookup? Shouldn't this be part of the schema import and
inference results and denote something along the lines of
element a == element a of type xs:anyType?!
Also affects the example and others:
The following is a reference to a global attribute declaration
attribute sizes
Especially with the resolution of the ISSD lookup semantics, we think
that the static type system should use explicit types, with another
syntax for references such as ref a/ref-attr a...

Section 2.3.4
Editorial/Technical
"When the type derivation is omitted, the type derives by restriction
from xs:anyType, as in:": Doesn't it derive from xdt:anyAtomicType,
since it is an atomic type?

Section 2.3.1
Editorial/Technical
Representation of content models: You are missing sequences (,).

Section 2.3.3
Technical
Please provide the precedence rules for the ItemType grammar. Especially
since no parens are allowed. For example,
element a, element b | element b, element a
can be interpreted in different ways.
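
To make the ambiguity concrete (example mine, not part of the original
comment), the content model above has at least two readings, which
denote different types:

    (element a, element b) | (element b, element a)
    (: if ',' binds tighter than '|': exactly the two-element
       sequences a b and b a :)

    element a, (element b | element b), element a
    (: if '|' binds tighter than ',': only the three-element
       sequences a b a :)

Without stated precedence rules, and with no parentheses allowed in the
grammar, a reader cannot tell which type is meant.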
Section 2.2
Editorial/Technical
Please mention namespace nodes as an XPath 2.0 specific node.

Section 2.3.1
Editorial/Technical
"For the purpose of static typing, the [XPath/XQuery] type system only
describes minOccurs, maxOccurs combinations that correspond to the DTD
operators +, *, and ?. ": Also add that one looks at
minLength/maxLength for list types.

MS-FS-LC1-003
Section 2.1.1
Editorial
Introduce the notion and purpose of XQuery Core before talking about its
syntactic notation.

MS-FS-LC1-004
Section 2.1.1
Editorial
Please replace "language's semantics" with "languages' semantics" (there
are two!)

MS-FS-LC1-005
Section 2.1.2
Editorial
"Expr raises Error": Add note that bold names refer to other judgments.
In this case it is an auxiliary judgment.

MS-FS-LC1-006
Section 2.1.3
Editorial
Final Note is more confusing than helpful. It is not an algorithm since
the rules are descriptive. Please remove.

MS-FS-LC1-007
Section 2.1.4
Editorial
Using "env" for environment group is confusing.

MS-FS-LC1-008
Section 2.1.4
Editorial
The scoping of some of the environments (variables, namespaces) is not
understandable from the single sentence "Updating the environment
captures the scope of a symbol ". Please improve or give a better
example.

MS-FS-LC1-009
Section 2.1.5
Editorial
Composition of several inference rules should be explained.

MS-FS-LC1-010
Section 2.2
Editorial
Please do not use the term "value" as a synonym for "item" or "sequence
of items" (see similar comments on other documents). Thus rename the
section of 2.2 to "XML items" and perform other such alignments
throughout the document.

MS-FS-LC1-012
Section 2.2
Editorial
Please replace "Elements have a type annotation and contain a value"
with "Elements have a type annotation and contain a sequence of items"

MS-FS-LC1-013
Section 2.2
Editorial
Replace "Text nodes always contain one string value of type
xdt:untypedAtomic, therefore the corresponding type annotation is
omitted."
with "Text nodes always contain one string value of type
xdt:untypedAtomic, therefore the corresponding type annotation is
omitted in the formal notation of a text node."

MS-FS-LC1-014
Section 2.2.1
Editorial
Replace "Notably sequences cannot be nested." with "Notably sequences
are automatically flattened." Since you can write nested sequences
syntactically; they are just flattened.

MS-FS-LC1-015
Section 2.2.2
Editorial
Replace "true as xsd:boolean " with "true as xs:boolean "

MS-FS-LC1-019
Section 2.3.2
Editorial
Replace "Generic node types (e.g., node()), " with "Generic node types
such as used in the SequenceType production (e.g., node()), "

MS-FS-LC1-020
Section 2.3.2
Editorial
Replace "The following is a type for a nillable element of type string"
with "The following is a type for a nillable element of type string
with name size"

MS-FS-LC1-022
Section 2.3.3
Editorial
Replace "with lower bound 0 or 1, and upper bound 1." with "with
minOccurs 0 or 1, and maxOccurs 1."

MS-FS-LC1-023
Section 2.3.3
Editorial
Replace "The "&" operator builds the "interleaved product" of two
types. For example, (element a & element b) = element a, element b |
element b, element a " with "The "&" operator builds the "interleaved
product" of two types. For example, (element a & element b) which is
equivalent to element a, element b | element b, element a "

MS-FS-LC1-025
Section 2.3.4
Editorial
"element street of type xsd:string*": Explain that this is a sequence
of elements with name street! People may bind * to xsd:string. Also,
please use xs:string and not xsd:string.

MS-FS-LC1-029
Section 2.4.1
Editorial
"In XQuery, static the input context may be defined by the processing
environment and by declarations in the Query Prolog ": Fix the
sentence.
MS-FS-LC1-030
Section 2.4.1
Editorial
"Evaluation works by bottom-up application of evaluation rules over
expressions, starting with evaluation of literals and variables.": From
a query processing point of view, the expressions (such as path
expressions) are actually evaluated top-down. Please explain based on
an example and call out that the bottom-up application actually still
may lead to a top-down evaluation of an expression such as
/a/b/c[@d=42].

MS-FS-LC1-031
Section 2.4.1
Editorial/Conformance
"Statically typed implementations are required to find and report type
errors during static analysis, as specified in this document. ": This
should be qualified as subject to implementations inferring more
precise types. Also, some people read it as saying that the error
messages are normatively given in the specification. Please write this
more clearly.

MS-FS-LC1-032
Section 2.4.2
Editorial
Please call out that some parts of the syntax are hotlinked (since it
is not easily visible on a printed page or even in a browser).

MS-FS-LC1-033
Section 2.4.2
Editorial
"FWLR" -> "FLWOR"

MS-FS-LC1-034
Section 2.5.1
Editorial
"xdt: for XML Schema components and built-in types.": Actually the
XQuery/XPath built-in types. Please fix.

MS-FS-LC1-035
Section 2.5.2
Editorial
"fn:distinct-nodes(node*) as node*": Has been cut. Please use a
function that still exists. Also please use the F&O formatting for
generic types.

Section 1.1 Normative and informative Sections
Technical
Make the mapping of XSD into the type system normative as far as
specified. Otherwise we specify normative static semantics on top of
non-normative basics.

MS-FS-LC1-001
Section 1 Introduction
Editorial
In the section starting with "[XPath/XQuery] is a typed language. ":
Make a better distinction between static type inference (inferring the
static type) and static type checking for discovering errors.
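
A minimal illustration of that distinction (example mine, not part of
the original comment):

    1 + 2
    (: static type INFERENCE derives the type xs:integer
       for this expression :)

    "abc" + 1
    (: static type CHECKING uses the inferred operand types
       (xs:string, xs:integer) to report a static type error,
       since + is not defined on strings :)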
There are a number of places in the Formal Semantics spec where it
chooses a static type that is more general than the most specific type
that is deducible. While we understand that an implementation is free
to extend static typing to produce more specific types than the Formal
Semantics spec says [1], we think it is important for the spec to
choose the most specific type possible.

Examples of places in the Formal Semantics where a more-specific type
is desirable:

----------------------------------------------------
1. SECTION 4.8.2 for expression
431: for expression type

Under static type analysis, when a type declaration is present, the
static environment should be extended by typing VarRef1 with type
Type1, NOT Type0.

----------------------------------------------------
2. SECTION 4.8.3, let expression
432: let expression type
bug

The let expression variable has its declared type, instead of its
actual type, entered into the static environment. Under static type
analysis, when a type declaration is present, the static environment
should be extended by typing VarRef1 with type Type1, NOT Type0.

----------------------------------------------------
3. SECTION 5.12, Function Declaration
454: function return type vs declared return type

Example:

declare function foo() as xdt:anyAtomicType { 3 };
foo() + 4

When doing static type analysis on foo() + 4, should foo()'s static
type be xdt:anyAtomicType or xs:integer? According to the current spec,
foo()'s static type is xdt:anyAtomicType. Then foo() + 4 raises a
static type error. foo()'s static type should be xs:integer so that
foo() + 4 does NOT raise a static type error.

----------------------------------------------------
4. SECTION 6.2.8, fn:reverse function
442: explain why factored type is used here
bug/editorial

It may not be obvious why the static type analysis for the fn:reverse()
function actually returns the factored type of the function's input
sequence. For example, consider: fn:reverse((1, 'abc')).
The input sequence type is (xs:integer, xs:string). After applying
fn:reverse(), its most precise type should be (xs:string, xs:integer).
If we compute the factored type, its type is (xs:string | xs:integer)+,
which is NOT as precise as (xs:string, xs:integer).

Suggest: either return the most precise type (preferred solution), or
add some text to explain why a factored type is used here.

----------------------------------------------------
[1] From the Formal Semantics spec:

3.6.7 Static Typing Extensions

"For some expressions, the static typing rules defined in the Formal
Semantics are not very precise (see, for instance, the type inference
rules for the ancestor axes -- parent, ancestor, and ancestor-or-self
-- and for the function fn:root function). Some implementations may
wish to support more precise static typing rules. A conforming
implementation may provide a static typing extension^XQ, which is a
type inference rule that infers a more precise static type than that
inferred by the type inference rules in the Formal Semantics. Given an
expression Expr, the type of Expr given by the extension must be a
subtype of the type of Expr given by the Formal Semantics."


These are Oracle's Formal Semantics Last Call comments, bugs. By "bugs"
we mean mistakes in the text. "bugs" differ from "editorial" in that
"bugs" significantly change the meaning of the document. Each comment
has a header, with a number, the section, section title, an
internal-to-Oracle comment number, and a 1-line summary.

----------------------------------------------------
B1. SECTION F9.3, Sequence Group
429: Sequence Vs Choice Group
bug

Immediately below the schema mapping section, "Choice groups" should be
"Sequence groups".

----------------------------------------------------
B2. SECTION 7.2.2.1, Static Semantics of axes
443: axis child:: of document {Type} should include text*
bug

The inference rule for the child axis of a document node type should
include text nodes as well.
statEnv |- axis child:: of document {Type} :
    Type & processing-instruction* & comment* & text*

Note text* is missing here.

----------------------------------------------------
B3. SECTION 7.2.3.2, Dynamic semantics of node tests
449: bug in the example conclusion
bug

For the 1st example,
element bar:c of type xs:int {5}
should be
element foo:d of type xs:int {5}
since we test for foo:*

----------------------------------------------------
B4. SECTION 3.5.3, Errors and Optimization
452: add 'NOT' in the last sentence of the section
bug

The last sentence of 3.5.3 says
"In the example above, a static type error would be raised because a
path expression may be applied to an atomic value"
It should be
"In the example above, a static type error would be raised because a
path expression may NOT be applied to an atomic value"

----------------------------------------------------
B5. SECTION 3.1.2, Dynamic Context
462: dynEnv.varValue can also map an expanded variable name to #EXTERNAL
bug

In addition to Value or #IMPORTED(URI), an expanded variable name can
also be mapped to #EXTERNAL.

----------------------------------------------------
B6. SECTION 3.4.4, Sequence Type Matching
bug
453: normalization of document-node(ElementTest)

There should be NO () after the processing-instruction, comment, and
text types when defining the normalization of
document-node(ElementTest), because the XQuery Formal Semantics type
notation has no () for the processing-instruction, comment, and text
types.

----------------------------------------------------
B7. SECTION 4.7.1, Direct Element Constructor
404: Normalization rule for Direct element constructor
bug

The normalization rule for a Direct element constructor distinguishes
between a constructor having one element-content unit and one having
more than one element-content unit, as illustrated in section 4.7.1.
The constructor containing one element-content unit preserves the type
info.
So
<date>{xs:date("2003-03-18")}</date> == element date {xs:date("2003-03-18")}

This rule appears to contradict the XQuery 1.0 language spec section
3.7.1.5, type of a constructed element. In XQuery 1.0, there is no
distinction between a constructor having one element-content unit and
one having more than one element-content unit.

If we plan to implement Michael Kay's suggestion (documented as an
editorial note in section 4.7.1), then the XQuery 1.0 spec needs to be
changed to accommodate this.

----------------------------------------------------
B8. SECTION 7.6.3, Attributes in validation context
466: inference rule for the case of no attribute name present
bug

The inference rule for the case of no attribute name present is not
right. The rule is stated as:

statEnv |- axis attribute:: of element of TypeName : Type1
------------------------------------------------------------
statEnv |- in context type(TypeName) with mode ValidationMode
           resolves to prime(Type2)

The conclusion should refer to Type1, NOT Type2.

----------------------------------------------------

These are Oracle's Formal Semantics Last Call comments, editorial. By
"editorial" we mean typos, plus suggestions on markup and wording to
make the spec more readable. Each comment has a header, with a number,
the section, section title, an internal-to-Oracle comment number, and a
1-line summary.

----------------------------------------------------
E1. SECTION F10: Particles
430: Typo
editorial

"Particles can be either and element reference ..."
"and" should be "an"

----------------------------------------------------
E2. SECTION 4.4: Arithmetic Expressions, Core Grammar
433: Typo
editorial

"There are no Core grammar rules for value comparisons ..."
Should be:
"There are no Core grammar rules for arithmetic expressions ..."

----------------------------------------------------
E4. SECTION 4.12.4 Castable, Normalization
437: Castable normalization rule is specified twice (cut-paste error?)
editorial

The "normalization of castable" expr rule is listed twice.

----------------------------------------------------
E5. SECTION 7.2.2.1, Static Semantics of axes
446: Suggest subsections under 7.2.2.1
editorial

Please add subsections under 7.2.2.1 based on the axis accessed (child
axis, parent axis, attribute axis, etc.) for readability.

----------------------------------------------------
E6. SECTION 7.2.2.1, Static Semantics of axes
444: possibly missing words 'type of'
editorial

"The static type of the attribute axis is similar to the static the
child axis."
Should be?
"The static type of the attribute axis is similar to the static [type
of] the child axis."

----------------------------------------------------
E7. SECTION 2.1.3, Notations for inference rules
448: inference rules need clearer markup
editorial

The inference rules are written as a collection of premises and a
conclusion, written respectively above and below a dividing line.
Having seen these rules in different fonts, browsers, and print-outs,
it is clear that some more markup is needed to make the rules easier to
read.

Suggest: draw a box around the entire rule and/or make some other
markup around the entire rule, such as background shading.

When there are multiple inference rules listed together, it is very
hard for the reader to mentally draw the boundary around each inference
rule. This is especially true for inference rules without premises.
When there are many inference rules, some of which have no premises and
some of which have premises, listed in a cluster under a section, it is
hard for the reader to separate the inference rules. See especially
7.2.3.1.1, 7.2.3.1.2, 7.2.3.2.2

----------------------------------------------------
E8. SECTION 7.2.3.1.2, kind tests
450: possible mis-wording in 7.2.3.1.2, 3rd paragraph
editorial

"After normalization of the kind test an XQuery type, the expression's
type is compared to the normalized XQuery type."
Should be?
"After normalization of the kind test [as] an XQuery type, the
expression's type is compared to the normalized XQuery type."

----------------------------------------------------
E9. SECTION 3.4.1, Predefined Types
458: xdt:untyped definition
editorial

The definition of xdt:untyped includes xdt:untypedAny. Presumably this
was overlooked when changing xdt:untypedAny to xdt:untyped.

----------------------------------------------------
E10. SECTION 6.1.4, The fs:convert-simple-operand function
465: reference to convert_untypedAtomic
editorial

The fs:convert-simple-operand function makes a reference to a function
convert_untypedAtomic. Please make a reference to 6.2.6 where
convert_untypedAtomic is defined. convert_untypedAtomic is also used in
6.2.1, with no reference to a definition.

----------------------------------------------------
E11. SECTION 3.6.7, Static Typing Extensions
464: There is no definition of '<:' in 3.6.7
editorial

Suggest: add a reference to the definition of '<:' in 7.3.2.

----------------------------------------------------
E12. SECTION 4.1.4, Context Item Expression
455: static type analysis missing for context item expression
editorial

The introduction section states that the context item may be either a
node or an atomic value. However, it does NOT explicitly state this as
an inference rule in a 'static type analysis' section.

----------------------------------------------------
E13. SECTION 6.2.2, The fn:collection and fn:doc function
439: The first premise of the first inference rule for fn:doc() is
listed twice
editorial

The premise
statEnv |- statEnv.docType(string) = Type
of the first inference rule for fn:doc() is listed twice.

----------------------------------------------------
E14. SECTION 7.2.3.2, Dynamic semantics of node tests
469: Singular used when plural required
editorial

In the second paragraph under Semantics, the phrase "are similar to
those for axis" should read "are similar to those for axes".
----------------------------------------------------
E15. SECTION 4.7.1, Direct Element Constructors
467: Two typos
editorial

Under Normalization, the paragraph beginning "So to preserve useful
type information" includes the phrase "constructor's that contain"; the
apostrophe between the "r" and "s" is incorrect, because the
constructor being discussed does not possess "that contain"...it's
merely a plural form of "constructor".

In the same paragraph, the last sentence reads "Here is the
normalization of the first and fourth examples above:" There are two
things wrong with this sentence. First, it's unclear whether "first"
and "fourth" are counting in reverse document order ;^) or in document
order starting at the subheading "Normalization". More importantly, the
shaded text that follows that sentence makes it clear that the examples
are not the "first and fourth", but either the "second and fourth" (if
in reverse document order) or the "first and third" (if in document
order from the subheading).

----------------------------------------------------
E16. SECTION 4.7.3.6, computed comment constructors
436: computed processing-instruction
editorial

Under the normalization section, the first 3 words "Computed
processing-instruction" should be "Computed comment"

----------------------------------------------------
E17. SECTION 4, Expression
456: static type analysis for expressions with empty type
clarification

It is a static type error for most (but not all) expressions to have
the empty type. Is '3+()' a static type error? We expect NO, because +
is normalized into fs:plus() (6.1.1), which can return the empty type.
Does the 'local namespace declaration' defined in 4.7.4 raise a static
type error, as its static type analysis yields the empty type?

----------------------------------------------------
E18. SECTION 3.4.1, predefined types
457: definition of fs:numeric
editorial

The definition of fs:numeric does not appear to be right.
The usage of '&' and ';' and the usage of '_' do not seem right. It
should be:

define type fs:numeric restricts xs:anyAtomicType
    { xs:decimal | xs:float | xs:double }

----------------------------------------------------
E19. SECTION 2.3.3, Content models
461: '&' operator and XML Schema All
editorial

The definition in the Formal Semantics of the '&' operator is
(element a & element b) = element a, element b | element b, element a
The text says "The interleaved product represents all groups in XML
Schema." But "all groups in XML Schema" would result in:
(element a & element b) = element a, element b | element b, element a |
element a | element b | empty

Suggest changing this text:
"The interleaved product represents all groups in XML Schema. All
groups in XML Schema are restricted to apply only on global or local
element declarations with lower bound 0 or 1, and upper bound 1."
to e.g.
"The interleaved product represents all groups in XML Schema,
restricted to apply only on global or local element declarations with
lower bound 0 or 1, and upper bound 1."

----------------------------------------------------
E20. SECTION 7.2.3.2.2, kind tests
451: The conclusion of the inference rule for comment() test is
incomplete
editorial

When the Formal Semantics document is printed (using Netscape), the
inference rule for comment nodes is printed as:

not(fn:node-kind(NodeValue) = "comment")
-----------------------------------------------------
()

The () conclusion should be replaced with
dynEnv |- test comment() with PrincipalNodeKind of NodeValue => ()

When the Formal Semantics document is displayed directly in a browser
(including Netscape), the conclusion displays as expected.

----------------------------------------------------
E21. SECTION 6.2.7, 6.2.8, fn:remove and fn:reverse function
441: 'factored type' not defined
editorial

There is no definition of 'factored type'.
Suggest defining it in 7.4 where prime type and its computation is defined, and then reference this definition wherever 'factored type' is used. ---------------------------------------------------- E22. SECTION 5.11 - variable declaration 434: no 'static type analysis' section editorial There is no 'static type analysis' section for Variable declaration. ---------------------------------------------------- E23. SECTION 6.2.6, The fn:min, fn:max, fn:avg, and fn:sum functions 468: Incomplete list of types for fn:min and fn:max editorial Under the subheading "Semantics", the paragraph starting "Instead of writing a separate judgement", the second sentence deals with fn:min and fn:max. That sentence has a list of data types that incorrectly omits the types xs:date, xs:time, and xs:dateTime. As evidenced by F&O section 15.4.3, fifth paragraph, sentence beginning "If date/time values do not have a timezone...", F&O properly handles fn:max and fn:min applied to those three types. Please add these three types to the list in the second sentence. (Do not add them to the sentences dealing with fn:avg and fn:sum, of course.) ---------------------------------------------------- E24. SECTION 2.3.2, Item types 460: element content nodes editorial Section 2.3.2 says: "We distinguish between document nodes, attribute nodes, and nodes that can occur in element content (elements, comments, processing instructions, and text nodes), as we need to refer to element content nodes later in the formal semantics." The spec refers to "element content type" elsewhere, but not "element content NODES". ---------------------------------------------------- Hi, a question dealing with the function fs:convert-operand as described in <> . When having statEnv <> .xpath1.0_compatibility <> = false, type($actual) = String and type($expected) = Integer no conversion is done, right? Considering a general comparison of a string and an integer value.
According to the formal semantics of the general comparison, convert-operand would be called for both values but no conversion would actually take place. So I end up with a comparison of a string and an integer that is not defined (no overloaded comparison function exists for these two types). That would force me to throw an XP0004 or XP0006 error due to the incompatible types, right? Thanks & Cheers Daniel Hello, The following is a comment from IBM on the Last Call working draft of Formal Semantics. Section 6.2.9 The last inference rule for fn:subsequence indicates that the types of the second and third arguments must be xs:integer. In fact, from F&O Section 15.1.3, they must be of type xs:double. Section 6.2.7 The inference rule for fn:remove doesn't seem correct. For example, for fn:remove(1,1), the rule yields type xs:integer. The result type should not simply use the quantifier of the type of the first argument, as the operation may remove an item from the sequence. If quantifier(Type) is "1" or "?", the inferred type of the result should be prime(Type)?; if quantifier(Type) is "+" or "*", the inferred type of the result should be prime(Type)*. Section 6.2.6 The text between the two inference rules for fn:sum indicates, "The second form of fn:sum takes one argument, with an implicit second argument of the integer value 0. In this case, the result type is the union of target type T and xs:integer." However, F&O section 15.3.5 states "If the converted sequence is empty, then the single-argument form of the function returns the xs:double value 0.0e0." The inference rule here needs to be made consistent with the description in F&O. The text before the first inference rule states, "When the function variable F is fn:min or fn:max, the target type T must be one of xs:decimal, xs:float, xs:double, xdt:yearMonthDuration, or xdt:dayTimeDuration." However, according to F&O section 15.3.3 and 15.3.4, these functions also accept arguments of type xs:date, xs:time and xs:dateTime.
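The fn:remove correction in the IBM comments above lends itself to a small executable sketch. Representing a sequence type as a (prime, quantifier) pair is an illustrative stand-in here, not the spec's formalism:

```python
# Sketch of the fn:remove result-type rule proposed above: keep the prime
# type, but weaken the quantifier, since removal may drop an item.
# Types are modeled as (prime, quantifier) pairs for illustration only.

def remove_result_type(prime, quantifier):
    """Inferred type of fn:remove($seq, $pos) for $seq of type prime-quantifier."""
    if quantifier in ("1", "?"):
        return (prime, "?")   # 0 or 1 items can remain
    if quantifier in ("+", "*"):
        return (prime, "*")   # removal can empty a one-item sequence
    raise ValueError("unknown quantifier: " + quantifier)

# fn:remove(1, 1): the naive rule yields xs:integer (quantifier "1"),
# but the result may be the empty sequence, hence xs:integer?
assert remove_result_type("xs:integer", "1") == ("xs:integer", "?")
assert remove_result_type("element a", "+") == ("element a", "*")
```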
4.4 The normalization rule for the unary arithmetic operators is problematic for values of type xs:float and xs:double. If the operand of unary minus is positive zero, the result produced using the usual IEEE 754 rules for arithmetic ought to be negative zero, but the result of the normalized operation 0.0-(+0.0) will be positive zero. Similarly, if the operand of unary plus is negative zero, the result ought to be negative zero, but the result of the normalized operation 0.0+(-0.0) will be positive zero. 3.1.2 The penultimate paragraph refers to the "fs" namespace, and indicates that it is a static error to define a variable in that namespace. However, section 2.5.1 indicates that the "fs:" prefix is a notational device, and that there is no namespace that corresponds. 2.1.3 Some inference rules are written as if the order of the premises is significant, using terms like "first", "second", "then" (see last paragraph of section 2.1.5, for example). If the order is significant, that should be stated here. Otherwise, those inferences should not be written as if the order was significant. Thanks, Henry (on behalf of IBM) ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto: Both the Formal Semantics and the informal XQuery specification seem to allow empty text nodes, but this conflicts with the Data Model: 6.7 Text Nodes 6.7.1 Overview Text nodes must satisfy the following constraint: 1. A text node must not contain the empty string as its content.
The formal semantics uses empty text nodes so it can suppress spaces between adjacent enclosed expressions: 4.7.1 Direct Element Constructors Normalization [ElementContent1 ..., ElementContentn]ElementContent-unit, n > 1 == fs:item-sequence-to-node-sequence([ ElementContent1 ]ElementContent , text { "" }, ..., text { "" }, [ ElementContentn ]ElementContent) 4.7.1.1 Attributes Normalization [ AttributeValueContent1 ..., AttributeValueContentn]AttributeContent-unit, n > 1 == fs:item-sequence-to-untypedAtomic( [ AttributeValueContent1 ]AttributeContent , text { "" }, ..., text { "" }, [ AttributeValueContentn ]AttributeContent) Note also that the rule for Dynamic Evaluation of Text Node Constructors does not check for an empty string. Only the static typing considers this by returning 'text?'. Also, the informal XQuery specification has a problem here: 3.7.3.4 Text Node Constructors 2. If the result of atomization is an empty sequence, no text node is constructed. ... However, it says nothing about a content expression that consists of a single empty string. I suggest removing the above sentence, and modifying the 3rd rule: 3. The individual strings resulting from the previous step are merged into a single string by concatenating them with a single space character between each pair. If the resulting string is the empty string, no text node is constructed (and the expression returns an empty sequence). Otherwise, the resulting string becomes the content of the constructed text node. Alternatively, change the data model to allow empty text nodes. I don't see any particular reason to prohibit them, even if they're not in the XML infoset. Just replace the sentence "In addition, document and element nodes impose the constraint that two consecutive text nodes can never occur as adjacent siblings" by adding "nor can any of their text node children have the empty string as the content". My recommendation: 1. Change the Data Model to allow empty text nodes. 2.
Change the Text Node Constructor to allow empty string content. 3. Change element and document constructor normalization so that they remove empty text nodes *and* combine adjacent text nodes. -- --Per Bothner per@bothner.com This is a request for the rationale as to why the quantifiers for "()" and "none" changed between Aug 2002 and May 2003. In Aug 2002: quantifier(()) = 0 quantifier(none) = 0 In May 2003, this changed to: quantifier(()) = ? quantifier(none) = 1 Why? The current version seems rather odd. Why should quantifier(none) be 1? - Paul BEA Systems, Inc. The normalization of FLWOR clauses seems wrong. Specifically, the 'for' normalization in 4.8.1 following the sentence: Therefore, a ForClause is normalized to nested for expressions: appears to be incorrect. Consider: for $i in 0 to 9 for $j in 0 to 9 stable order by $j return 10*$i + $j This yields the tuple sequence: (i=0,j=0), (i=0, j=1), (i=0, j=2) ... so the sorted result should be: 0, 10, 20, ..., 1, 11, 21, ... The "normalized" expression is: for $i in 0 to 9 (for $j in 0 to 9 stable order by $j return 10*$i + $j) This performs 10 independent sorts, none of which change the order, so the result is as if there was no order by clause: 0, 1, 2, ..., 10, 11, 12, ... -- --Per Bothner per@bothner.com MS-FS-LC1-058 Section 4.1.3 Editorial "It is a static error for any expression other than () to have the empty type (see [4 Expressions].)": Make this statement less absolute: "Often it is a static error for any expression to have the empty type. See [4 Expressions] for the detailed rules." MS-FS-LC1-059 Section 4.1.5 Editorial "xdt:anyAtomic" -> "xdt:anyAtomicType" MS-FS-LC1-061 Section 4.1.5 Editorial Instead of listing prototypical values, just explain what it is being used for. MS-FS-LC1-063 Section 4.1.5 Editorial Please clarify when dispatch based on provided union type is needed. Most function dispatches are based on arity and not type and thus would not lead to a successful application of this rule.
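Per Bothner's FLWOR counterexample above can be replayed mechanically. This Python sketch stands in for the XQuery tuple-stream semantics; it is an illustration of the argument, not an implementation of the spec:

```python
# Replaying the FLWOR counterexample above. Correct semantics: build the
# full tuple stream, then stable-sort it by $j. The questioned
# normalization instead sorts each inner 'for' group independently.

tuples = [(i, j) for i in range(10) for j in range(10)]

# One stable sort over the whole tuple stream, keyed on $j:
correct = [10 * i + j for (i, j) in sorted(tuples, key=lambda t: t[1])]

# The nested-for normalization: ten independent inner sorts, each already
# in $j order, so the order-by has no visible effect:
normalized = []
for i in range(10):
    for j in sorted(range(10)):
        normalized.append(10 * i + j)

assert correct[:4] == [0, 10, 20, 30]    # 0, 10, 20, ... as the comment says
assert normalized[:4] == [0, 1, 2, 3]    # as if there were no order by
assert correct != normalized
```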
MS-FS-LC1-066 General Editorial Please provide op: functions with links to their definition in the F&O spec. MS-FS-LC1-067 Section 4.2.1/4.3.2 Editorial It would be better to combine the rules into one section. MS-FS-LC1-069 Section 4.2.1 Editorial The remainder of the static semantics of axis and node test is given in section 7. But there is no link visible that takes one there and it is not clear in this section how the semantics play out. Either move parts up or add references. MS-FS-LC1-105 Section 4.3.3 Editorial Please link to section 6.2.10 where the op:union et al are defined, instead of to 6. MS-FS-LC1-112 General Editorial/Technical Please also refer to the section where the dynamic semantics of the functions is specified. Not only the function invocation! MS-FS-LC1-115 Section 4.7.1 Editorial "The next example contains six element-content units:": depends on the XQuery boundary whitespace handling rules since you have boundary whitespace in your query. MS-FS-LC1-117 Section 4.7.1 Editorial The section numbering needs to be better. Elements do not have their own subsections but attributes do. Others are in a separate 4.7.2. MS-FS-LC1-124 Section 4.7.3.4 Editorial "The static semantics checks that the argument expression has type xs:string." This is guaranteed by the normalization. So why do we need this condition? MS-FS-LC1-128 Section 4.8 Editorial There are several places (including the title) where we refer to FLWR instead of FLWOR. Please fix. MS-FS-LC1-131 Section 4.12.2 Editorial Replace " an optional VarRef, which is bound the value of the input expression" with " an optional VarRef, which is bound to the value of the input expression" Appendix B.2 Technical The table implies static types but the semantics described is the dynamic semantics. This needs to be rectified (for example, what if the static type of A is xs:integer? ? What is the result of () eq () (static error if static typing? () dynamically.).
MS-FS-LC1-109 Appendix B.2 Editorial Please replace "Gregorian" with "Australian" :-) MS-FS-LC1-111 Appendix B.2 Editorial fs:minus: Decimal -> xs:decimal Section 6.2.10 Editorial Some spurious ; in judgment for op:intersection Appendix B.2 Technical fs:is-same-node: Here you indicate static types. Use this for the other ops above as well and provide static and dynamic semantics. Also fix node to be node(). Section 4.2.2 Technical Dynamic dispatch of positional predicate is problematic for performing implementations. See also comments MS-XQ-LC1-065, MS-FS-LC1-114. Section 4.3.1 Technical Do we allow expr1 to expr2 where either expr1 or expr2 have a static type of atomic? ? If so, we either need to extend the op:to signature or make it clear that this is allowed during normalization. Section 4.7.2 Technical CDATA should not be treated as a constructor. The XQuery parser should take care of it before it gets to the F.S. level. Section 4.7.1 Technical The whole business of distinguishing element content to sometimes preserve content type is incompatible with the dynamic semantics of the construction. With the three current validation modes on construction, we cannot infer statically some preserved type of the content. Please remove this complexity from the spec. Note: Even with preserve mode, we cannot preserve atomic type information. This will also simplify the static typing of construction and make the section easier to understand. Section 4.3.2 Technical In the case of a literal numeric, shouldn't we also have a special normalization rule for last()? Section 4.6 Technical As mentioned in MS-FS-LC1-068, we should not use the full power of fn:boolean() since we need to be able to statically dispatch on predicates. Section 4.7.2 Technical The construction of comments and PIs uses the attribute content normalization that will perform entitization and escaping. This should not occur when constructing comments and PIs.
Instead, certain character sequences (-, --, etc.) need to be treated specially in the context of comment constructions. Sections 4.7.3.1/2, 4.13 - We will not review these sections due to the changes with respect to validation and construction. We expect them to be submitted separately for a review once the changes have been made. Section 4.7.3.3 Technical The document node constructor should not erase type information on its content. It should keep that information (always). Section 6.1.6 Technical fs:item-sequence-to-node-sequence(): items_to_nodes() should collapse atomictype*/+ to text and text*/+ to text. Section 4.7.3.5 Technical Normalization is using two things not available in XQuery: A union sequence type (just call this out), and casting from string to QName (please fix). Since we raise an error if the QName is not an NCName, we should instead use NCName (which happens to be a subtype of xs:string). This would simplify the normalization and still give us type safety. Section 4.8.4 Technical The static type for order by seems wrong: Assume that element person can contain multiple phone elements of type xs:int. Then the following expression should be a static error: for $p in /person order by $p/phone return $p Since there is more than one phone per person. Also we need rules to deal with type promotion of numeric, untyped and other types. Section 4.12.4 Editorial/Technical Replace "[Expr castable as AtomicType]Expr == let $v as xdt:anyAtomicType? := fn:data([ Expr ]Expr) return $v castable as AtomicType? " with "[Expr castable as AtomicType ?]Expr == let $v as xdt:anyAtomicType? := fn:data([ Expr ]Expr) return $v castable as AtomicType? " Section 4.12.5 Technical Constructor functions now cast to T?, thus we request the following normalization: [AtomicType(Expr)]Expr == [(Expr) cast as AtomicType?
]Expr Section 4.7.4 Editorial/Technical Since a namespace declaration only accepts string literals, the normalization and static semantics judgment seem like overkill. There is no need for the casting to untyped and then string and there is no need to check for the input type (since the parser would not have allowed anything else to go through). Section 4.8.3 Editorial/Technical Please add a sentence about what happens if a variable with that name already exists or add a reference to the variable name and scoping rules. Section 4.7.3.4 Technical "The type is zero-or-one, because no text node is constructed if the argument of the text node constructor is the empty string.": Now the empty text node is allowed. Please align with that decision. Section 4.12.3/4 Technical The current normalization of cast as/castable seems to raise a static error if I have an expression E cast as T and E is inferred to have static type ET?. This means that most applications of cast as T will lead to static errors. Shouldn't this be a dynamic error? Section 4.1.5 Editorial/Technical "SequenceType <: xs:anySimpleType" -> "SequenceType <: xdt:anyAtomicType*" (no named union types or list types can be sequence types!) Section 4.1.5 Technical "if the expected type is a union of atomic types then this check is performed separately for each possibility": Remove. The expected type cannot be a union of atomic types. Or did you mean the inferred type? General Technical The formal semantics raises both static type errors and dynamic type errors, without differentiating that under static typing, no dynamic type errors should occur. This needs to be made explicit. With static typing on, all type errors need to be detected statically (conservative static typing!). Section 4.1.5 Technical What happens if I have an inferred type of a union of atomic (xs:int | xs:double) and I find a function expecting xs:double. The rule doing promotion does not apply, the union rule also does not apply it seems. Please clarify.
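One reading of the union-dispatch question above is that the check must succeed for every alternative of the inferred union after numeric promotion. A sketch of that reading, with an illustrative promotion table (not the spec's full promotion matrix):

```python
# Per-alternative matching of an inferred union type against an expected
# atomic type, as one possible answer to the question above. The type
# names and promotion table are illustrative, not normative.

PROMOTES_TO = {
    "xs:integer": {"xs:integer", "xs:decimal", "xs:float", "xs:double"},
    "xs:decimal": {"xs:decimal", "xs:float", "xs:double"},
    "xs:float":   {"xs:float", "xs:double"},
    "xs:double":  {"xs:double"},
}

def union_matches(inferred_alternatives, expected):
    """True iff every alternative of the inferred union promotes to expected."""
    return all(expected in PROMOTES_TO.get(alt, {alt})
               for alt in inferred_alternatives)

# (xs:integer | xs:double) against a parameter of type xs:double: both
# alternatives promote, so the application could be accepted.
assert union_matches({"xs:integer", "xs:double"}, "xs:double")
# (xs:integer | xs:string) cannot be accepted for xs:double.
assert not union_matches({"xs:integer", "xs:string"}, "xs:double")
```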
Also, is xs:integer? treated as an atomic type or a union of xs:integer | empty()? Section 4.1.1 Editorial/Technical Mention that literal overflows are raised during parsing. MS-FS-LC1-070 Sections 7.2.2.1/7.2.3.1 Editorial Add more whitespace before the inferences statEnv |- axis Axis of none : none statEnv |- axis Axis of empty : empty MS-FS-LC1-075 Section 7.1.4 Editorial Sometimes the type lookup results in just a type name, sometimes it returns "of type Typename". Please be consistent. MS-FS-LC1-077 Section 7.1.10 Editorial Can you add a more complex example that shows restrictions and derivation? MS-FS-LC1-080 Section 7.1.7 Editorial "if the complex type is mixed, interleaves the type with a sequence of text nodes and xdt:anyAtomicType.": Note that the atomic type is or'ed and not part of the interleave adjustment. MS-FS-LC1-081 Section 7.1.9 Editorial/Technical "statEnv |- Type2 is Type1 extended with union interpretation of TypeName statEnv |- Mixed? Type1 adjusts to Type2 ------------------------------------------------------------------------ -------- statEnv |- of type TypeName expands to Type2 " -> Are the numbers mixed up? MS-FS-LC1-084 Section 7.1.6 Editorial Please add an example. MS-FS-LC1-095 Section 7.2.2.1 Editorial Please remove the "improved" version for ancestor axis unless we fix the type inference for the parent axis! MS-FS-LC1-096 Section 7.2.3.1.1 Editorial "statEnv |- QName1 of elem/type expands to expanded-QName statEnv |- QName2 of elem/type expands to expanded-QName": Add subscripts to expanded-QName and add an equality condition of the two expanded QNames (same for attributes) MS-FS-LC1-097 Section 7.2.3.1.1 Editorial Please add more whitespace between inference rules. The current spacing is unreadable. MS-FS-LC1-098 Section 7.2.3.1.1 Editorial Replace "statEnv |- test * with "element" of element prefix:local TypeSpecifier? : element prefix:local TypeSpecifier?" with "statEnv |- test * with "element" of element QName TypeSpecifier?
: element QName TypeSpecifier?" Not every QName has a prefix! (same holds for attributes) MS-FS-LC1-099 Section 7.2.3.1.1 Editorial How do we deal with testing an element name foo against element()? I would assume that the inferred type is element(foo)? and not element(foo). Same for a against a|b. The result should be a? and not a or raise an error. Same issues with *:b test a:*: should be a:b? and not a:b. Please clarify the use of ? for syntactic purposes and as an occurrence indicator MS-FS-LC1-100 Section 7.2.3.1.1 Editorial With the adoption of element(name) being the same as the name name test, can we simplify the specification by making them map to the same rules? MS-FS-LC1-101 Section 7.2.3.1.2 Editorial "Document kind test" etc: What should be the editorial status of these words? Title? Sentence? MS-FS-LC1-102 Section 7.2.3.1.2 Editorial What should the following inferred type be: Inferred: document(element(A), Kindtest: document(element(*,T)). That seems to be document(element(a,T)?). However, I think it should be document(element(a,T))? MS-FS-LC1-103 Section 7.2.3.1.2 Comment We have not reviewed Element and Attribute kind tests since they will be changed due to the change in SequenceType semantics. MS-FS-LC1-108 Section 7.4 Editorial The quantifier composition | and . (dot) are the same. Just use one of them. Section 7.4 Technical Prime(empty) = none: How can we get none as the prime type? Isn't that leading to errors since none reflects the type of an error? Why not = empty? For example, you say: prime(element a | empty) = element a But isn't this element a | none? And doesn't none propagate (so that we propagate errors?)
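To make the prime/quantifier questions above concrete (and the 'factored type' notion that comment E21 asks to have defined: prime(Type) paired with quantifier(Type)), here is a toy model. The occurrence-range encoding is an illustrative stand-in for the spec's quantifier algebra, not the spec's grammar:

```python
# Toy model of type factoring: factored(T) = prime(T) with quantifier(T).
# A sequence type is modeled as its item type names plus an occurrence
# range; this encoding is illustrative only.

def prime(item_types):
    """Union of all item types that can occur in the sequence type."""
    return frozenset(item_types)

def quantifier(min_occurs, max_occurs):
    """Collapse an occurrence range onto one of the four quantifiers."""
    if (min_occurs, max_occurs) == (0, 0):
        return "?"             # May 2003: quantifier(()) = ?, as queried above
    if (min_occurs, max_occurs) == (1, 1):
        return "1"
    if (min_occurs, max_occurs) == (0, 1):
        return "?"
    return "*" if min_occurs == 0 else "+"

# element a, element b  factors as  (element a | element b)+
assert prime(["element a", "element b"]) == {"element a", "element b"}
assert quantifier(2, 2) == "+"
assert quantifier(0, 0) == "?"   # the empty sequence type ()
```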
Section 7.2.3.1.2 Technical " ------------------------------------------------------------------------ -------- statEnv |- test node() with PrincipalNodeKind of NodeType : NodeType If none of the above rules apply, then the node test returns the empty sequence and the following rule applies: statEnv |- test node() with PrincipalNodeKind of NodeType : empty ": Remove the judgment. This is confusing and is in contradiction to the rule before the sentence. As a fallback, it also does not cover any of the other kind tests! Section 7.2.2.1 Technical " ------------------------------------------------------------------------ -------- statEnv |- axis ancestor:: of NodeType : element* Note that this rule will always result in the type (element | document)* ": These seem to contradict each other. Please fix the judgment. Section 7.2.2.1 Technical Replace "statEnv |- axis descendant:: of Type1 : Type2 ------------------------------------------------------------------------ -------- statEnv |- axis descendant-or-self:: of Type1 : (prime(Type1) | prime(Type2))* " with "statEnv |- axis descendant:: of Type1 : Type2 ------------------------------------------------------------------------ -------- statEnv |- axis descendant-or-self:: of Type1 : Type1 | Type2 " Do the same for ancestor-or-self. Section 7.2.2.1 Editorial/Technical "statEnv |- ElementType type lookup Nillable? TypeReference Type = (Type1, Type2) statEnv |- Type1 <: attribute * statEnv |- Type2 <: ElementContentType * | xdt:anyAtomicType * ------------------------------------------------------------------------ -------- statEnv |- Type has-attribute-content Type1 " -> Why the type lookup? Section 7.2.2.1 Technical has-attribute-content: Is the result type an interleave of all attributes or a union? Sections 7.1.7/7.1.8 Technical The typed BuiltInAttributes should not be added if the content type is untyped, since the untyped content could contain such a named attribute that contains a different value.
Section 7.1.8 Technical "fs:anyURIlist" -> make it an anonymous type since there is no such named type in XML Schema. Section 7.1.6 Editorial/Technical Since we now use the union expansion, do we still need to add "| xdt:anyAtomicType"? Since we know we will get the more precise subtypes anyway... Section 7.1.5/7.1.7 Technical "Type2 extended by (processing-instruction? & comment?)*": Extended by uses "," for adding element content. This clearly is incorrect for adding PI and comment interleaves since they can appear in lots of places. For example, assume that Type2 is ((a|b), c )& d. Then the above would yield ((a|b), c )& d, (processing-instruction? & comment?)* which is incorrect. It should yield: ((a|b) & (processing-instruction? & comment?)*, c & (processing-instruction? & comment?)*) & (d & (processing-instruction? & comment?)*) Section 7.1.10 Technical The expanded type union should preserve whether the subtypes are mixed or not so the later adjustment will add the text nodes. Section 7.1.6 Technical " ElementContent & text* " does not match: If the Element content is a, b, then (a,b) & text* does not match (a, text, b) even though mixed content allows that. Please make it explicit or use a new mix operator. Section 7.1.10 Technical "statEnv.typeDefn(expanded-QName) => define type TypeNameR,1 restricts TypeName0 Mixed? { TypeR,1? }": There are many rules that look up different type definitions using the same expanded-QName. This looks like a bug. Section 7.1.7 Editorial/Technical "statEnv |- Type1 mixes to Type2 statEnv |- Type2 extended by BuiltInAttributes is Type3 statEnv |- Type3 & processing-instruction* & comment* is Type4 ": What is the meaning of the last is? Is that Type3 extended by PIs and comments? Also see comment on extended by...
Section 7.1.4 Technical Replace "statEnv |- ElementName of elem/type expands to expanded-QName statEnv.elemDecl(expanded-QName) undefined ------------------------------------------------------------------------ -------- statEnv |- element ElementName type lookup Nillable? xdt:untyped " with "statEnv |- ElementName of elem/type expands to expanded-QName statEnv.elemDecl(expanded-QName) undefined ------------------------------------------------------------------------ -------- statEnv |- element ElementName type lookup T" where T is determined by the context (was the containing element of type xdt:untyped or xs:anyType?). Depending on the resolution of preserve mode on the data model, this comment also applies to attribute lookup. Section 7.2.2.1 Technical Can we improve the following rule: Type = Type1, Type2 statEnv |- Type1 <: attribute * statEnv |- Type2 <: xdt:anyAtomicType + ------------------------------------------------------------------------ -------- statEnv |- Type has-node-content text ? to take list types (+ needs to be *) and string types into account? Section 7.2.2.1 Technical Why is it ElementContentType + and not ElementContentType * in the judgment has-node-content? Section 7.2.2.1 Technical PIs and comments can occur before, after and in between the elements given by the Type of the document. Thus the following rule: ------------------------------------------------------------------------ -------- statEnv |- axis child:: of document { Type } : Type & processing-instructions* & comment* should be: ------------------------------------------------------------------------ -------- statEnv |- axis child:: of document { Type } : (pi? & cmt?)* , interspersepicmt(Type), (processing-instructions? & comment?) Or introduce a new symbol to denote the interspersing semantics instead of misusing &. Hi, the latest XQuery Formal Semantics defines the fs:convert-operand() function in Section 6.1.3 [1].
In a nutshell, fs:convert-operand() casts its first argument to the type of a given second argument, if it is a subtype of xdt:untypedAtomic. Otherwise the first argument is returned unchanged. The type of fs:convert-operand(), however, is defined as fs:convert-operand($actual as item *, $expected as xdt:anyAtomicType) as xdt:anyAtomicType ? . Note that it allows an arbitrarily long sequence of items as its first argument. The return value, however, is a sequence of at most length one. The specification only considers the case where $actual has a length no longer than one. So the return value for $actual being a longer sequence remains undefined. fs:convert-operand() could easily be fixed by restricting $actual to an optional item (item?). This, however, would make queries such as XMark 11 [2] illegal: for $p in fn:doc("auction.xml")/site/people/person return let $l := for $i in document("auction.xml")/site/open_auctions/open_auction/initial where $p/profile/@income > (5000 * $i/text()) return $i return element items { attribute name { $p/name/text() }, text { count ($l) } } Note the `5000 * $i/text()'. $i/text() evaluates to node*. The Formal Semantics rule for Arithmetics [3] applies fn:data(), returning an xdt:untypedAtomic* on the non-validated document. If the first argument of fs:convert-operand() were restricted to item? it could not be applied to `$i/text()', making the above query illegal. Regards Jens Teubner [1] [2] [3] -- Jens Teubner University of Konstanz, Department of Computer and Information Science D-78457 Konstanz, Germany Tel: +49 7531 88-4379 Fax: +49 7531 88-3577 Statistics show that most people are in the majority, while a few are in the minority. -- Nitin Borwankar
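Both the convert-operand comments above (Daniel's non-conversion case and Jens's item* case) can be illustrated with a small Python stand-in. The (type name, value) pair representation and the restriction to an xs:double cast are my simplifications, not the spec's definition:

```python
# A Python stand-in for fs:convert-operand as described above: untyped
# values are cast to the expected operand's type; typed values pass
# through unchanged. Only the xs:double cast is spelled out here.

def convert_operand(actual, expected_type):
    """actual: list of (type, value) pairs, i.e. an item* sequence."""
    out = []
    for typ, val in actual:
        if typ == "xdt:untypedAtomic":
            if expected_type == "xs:double":
                out.append(("xs:double", float(val)))
            else:
                out.append((expected_type, val))   # other casts elided
        else:
            out.append((typ, val))                 # NOT converted
    return out

# Jens's XMark case: untyped text() content is cast for the arithmetic,
# and mapping over the sequence handles $actual of type item*:
assert convert_operand([("xdt:untypedAtomic", "5")], "xs:double") == \
       [("xs:double", 5.0)]
# Daniel's case: a typed xs:string stays a string, so the later comparison
# with an integer fails with a type error (no such overload exists).
assert convert_operand([("xs:string", "5")], "xs:integer") == \
       [("xs:string", "5")]
```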
http://www.w3.org/2005/02/formal-semantics-issues.html
Details Description Notes: * running test method "testMultipleEvals" (single threaded case) always succeeds * running test method "testMultiThreadMultipleEvals" always fails * commenting out the allow.inline.local.scope line makes the multithread test pass (but of course has other side-effects) Interestingly, for the multithread case it seems that 1 thread always succeeds and N-1 threads fail. Issue Links - is duplicated by VELOCITY-796 Velocity #parse not parsing content. - Resolved VELOCITY-811 Concurrency problems with rendering macros - Resolved - is superseded by VELOCITY-797 Inline macros need to be kept with their Template, not centrally managed - Open Activity Simon, does this only occur via Velocity.evaluate(...)? Or was this happening with standard template retrieval/rendering too? I ask because the VelocimacroManager uses the "log tag" parameter given to evaluate(...) as the local namespace key. If i alter your test to iterate the "log tag", then the namespace collisions disappear (of course), and your test passes. Granted, i don't yet understand exactly why the VelocimacroManager fails to properly handle multi-threaded addition of macros (the same macro, even) to the same namespace (i haven't had my nose deep in this code in a while). Still, that appears to be the catching point at the moment. No solution yet, but i can replicate it via Velocity.mergeTemplate, by turning caching off on the resource loader (StringResourceLoader in this case). Of course, that's effectively the same as using evaluate(...). Using a resource loader and turning caching on fixes the problem. Ok, RuntimeInstance dumps the vm namespace for a template before parsing it. It does this under the assumption that if you are reparsing the template, it must have changed. If it changed, we shouldn't keep local macros around, as they may no longer exist in the new version. This still seems to me the correct behavior.
When rapidly (re)parsing the same template in multiple threads, this means that thread B may dump the namespace newly created and not yet used in rendering by thread A. This unfortunate event can be avoided by permanently caching templates, or greatly reduced in likelihood by caching them with an infrequent modificationCheckInterval. Though, even there it is possible, i think. So, here are the choices i've thought of thus far: 1) leave it and add disclaimers about inline macros when templates aren't permanently cached 2) we can flip the bit and risk memory leakage (and potential interference) from stale macros 3) add another config switch to let users choose between #1 and #2 4) synchronize creatively to prevent simultaneous parsing and/or rendering in the same namespace 5) try to find a way to refactor so inline macros are kept with the template, not managed centrally #5 is more than i can take on right now, and probably won't work for the 1.x branch. maybe in 2.x #4 would probably both wreck performance and never quite work right #3 is tiresome, but probably the best current option #2 would probably only be terrible for people with users creating templates, but could be quite the memory leak for them #1 is arguably acceptable for 1.7, but is not a long-term solution thoughts? My bug 811 was marked as a duplicate of this. For me this happens in the context of including macro modules to override some default macros. See example below. When the top level template.vtl is merged concurrently using the same engine instance the effects described above do happen. ----- template.vtl ---- #parse(macros.vtl) #myMacro("param") ---------------------------- ------ macros.vtl ------ #macro(myMacro $param) #end ----------------------------- The system I am working on uses macro overloading a lot ( as a poor man's subclassing to keep things sane ) what would really be nice is a capability to include and exclude VM libraries dynamically for overloading purposes.
I have modified the code in the RuntimeInstance.parse(reader, templateName) to pass false as the "dump" parameter and it seems to fix the issue for now. In my system the VTL files are not changed all that often and when they do macros are not typically removed, so if the parse would retain some unused macros in the cache until the next JVM restart it's not a big deal. It would be a big deal if the parse does not refresh the existing macros on parse though. I still think that the synchronization in the VelocimacroManager between getNamespace, addNamespace and dumpNamespace is broken and just presents a smaller window for the race condition than the one between the dump and parse. We are experiencing the same issue with 1.7.0 on linux. This, however, has never happened on Windows. Example code and associated junit test that demonstrate this issue
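The failure mode described in this thread can be replayed without Velocity at all. Below is a toy Python model (the parse/render functions and the dict-based namespace table are illustrative stand-ins, not Velocity's API): the namespace is dumped at the start of every (re)parse, so a second parser's dump can erase macros a first thread just registered but has not yet rendered. The interleaving is driven single-threaded here to make the failure deterministic.

```python
# Toy model of the VelocimacroManager race described above -- NOT Velocity's
# actual code. A shared per-namespace macro table is dumped before each
# (re)parse; if thread B's dump lands between thread A's parse and A's
# render, A's freshly registered macro is gone.

namespaces = {}  # namespace (the "log tag") -> {macro name: macro body}

def parse(namespace, macros):
    # RuntimeInstance dumps the namespace before reparsing a template...
    namespaces[namespace] = {}
    # ...then the parser re-registers the template's inline macros.
    namespaces[namespace].update(macros)

def render(namespace, macro):
    table = namespaces.get(namespace, {})
    if macro not in table:
        raise KeyError("macro %r vanished from namespace %r" % (macro, namespace))
    return table[macro]

# Single-threaded replay of the unlucky interleaving:
parse("tag", {"myMacro": "#macro body"})  # thread A parses the template
namespaces["tag"] = {}                    # thread B starts parsing: dump only
try:
    render("tag", "myMacro")              # thread A renders -> macro is gone
    outcome = "rendered"
except KeyError:
    outcome = "macro lost"
print(outcome)  # macro lost
```

Permanently caching templates avoids the replayed interleaving because the dump-then-add sequence only runs once per template.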
https://issues.apache.org/jira/browse/VELOCITY-776?focusedCommentId=12905340&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
The avr-g++ C++ compiler that Arduino uses does need forward declarations. To make it easier for beginners, the IDE auto-generates forward declaration code and puts it into your C++ code before sending it to the avr-g++ compiler.

Dr.Smart

Thanks for that insight Dr.Smart – I did not think about that. Good to know

Dear Hans, I have only just discovered your wonderful introductory Arduino website. Of the many introductions to Arduino I have looked at, yours is by far the clearest and best explained. Many thanks for taking the trouble to think through the various key ideas so systematically and to present them in such an accessible way. I particularly appreciate your section on functions. It led me to consider different ways that functions can be used, especially the distinction between (i) functions that do not return a result (where the datatype is ‘void’) and (ii) functions that do return a result (where the data type is something other than ‘void’, e.g. ‘int’). I will now try to explain what seem to me to be important implications of this distinction for those writing Arduino sketches.

My understanding is that in both these cases, the function is called when needed (such as from the main loop of the sketch). However, only in the return scenario does the calling part of the sketch become ‘aware’ of the consequences of the called function (that is, of the output of the called function). The subsequent part of the sketch (after the return from the function) can use the value returned in the processing carried out by its further lines of code. In contrast, where the data type is ‘void’, the called function causes something to take place external to the sketch (such as a LED being turned on), so the rest of the sketch remains unaware of this. Consequently, its subsequent processing activity is unaffected by the function having been called.
So, a function that has a return (and the necessary parameters to allow it to return something after having been called) is the ‘servant’ of the main loop, but non-return (i.e., ‘void’) functions do not directly serve the main loop. Rather, the void type of function typically serves purposes such as providing output to components attached to the Arduino. Does this make sense or have I not understood things properly? I would very much appreciate your reactions and clarification on this. Regards

Ric

Hi Ric, thank you for the very kind words and compliments! It’s very much appreciated! Your view on functions does make sense, and it looks like you got the gist of it just fine. The only things I’d like to add are these:

1) Functions can call other functions, and technically loop() and setup() are functions of their own. To illustrate this, a function can call yet another function, or even itself (recursion).

2) “Awareness” of things that happen in a function would suggest that anything changed or done in a (void) function would not affect the “function” where the function was called from. But this is not always the case, for example with global variables.

To explain example 2) better, say we have a function AddTwo. There is no return, and “a” is not defined. So this would cause an error, unless we define a global variable. The function “AddTwo” will not return a value (void), but will change the global variable “a”. So each time AddTwo is called, which would be a lot in this example, the global variable “a” will increase by 2 (until it hits the max capacity of the variable type “int”). Hope this explains it a little more in depth

Hans

Dear Hans, Thanks very much for your speedy and most helpful reply. The two additional aspects you explained nicely expand on the possible roles that functions can play in Arduino sketches. It is interesting and useful to think about a hierarchical organization of processing in which a higher-order function calls other ‘subservient’ functions.
Now I must go back and have another look at your section on function recursion! I can also see from your reply that ‘scope’ is a really important concept to be kept in mind. Cheers

Ric

Hi Ric! Indeed, well spotted: “scope” plays an important role as well. Note: using global variables is something you’d want to do sparingly – the more global variables you have, the more confusing it can get when the same or similarly named variables are used in functions. The good thing is that Arduino sketches are typically rather simple. But if you later switch to different systems, code can become huge and confusion will kick in

Hans

Do you walk your dog in the “part” or in the “park”? I think you have a previously undiscovered typo.

C2carey

Hi C2carey! Yeah, that is a pretty good blunder right there hahaha – Thanks for catching that! I’ve updated the text …

Hans

Hi! I’d just like to ask if you have this tutorial in PDF form. This is so good that I’d love to keep it on my laptop or phone for offline use. Though don’t feel obligated to do so, I just would be interested. Thank you!

Blade

Hi Blade, I totally understand the desire to have printed material, even if it’s just to lay next to you while working on something and/or make notes with the text (a reason why I prefer text over a YouTube video). I have implemented a specific make-up for when the page is being printed. So just print the page to a PDF. This is supported under macOS and Windows 10 natively – in the printer dialog you’ll have to select a PDF printer or select “Save as PDF” (depending on the Operating System and/or browser you’re using). Note: The comments will be included as well. To exclude those, scroll through the preview and determine the page where comments start. Then in the printer dialog, set a custom number of pages. I just tried it with this page and it showed 34 (!)
pages, but the comments (always forced to start on a new page) start at page 18 in my browser (this may be different on your system). So to create a PDF without the comments, I set the page range to “1-17”, which results in a clean PDF document. Please let me know if you run into any issues. I’ve only tested a few pages with this setup, so any feedback on the results is much appreciated.

Hans
https://www.tweaking4all.com/hardware/arduino/arduino-programming-course/arduino-programming-part-6/
This is the second post in this series; in it, we expose data from your server using the MVC 6 Web API. We’ll retrieve a list of movies from the Web API and display the list of movies in our AngularJS app by taking advantage of an AngularJS template.

Enabling ASP.NET MVC

The first step is to enable MVC for our application. There are two files that we need to modify to enable MVC. First, we need to modify the project.json file so it includes MVC 6:

{
  "webroot": "wwwroot",
  "version": "1.0.0-*",
  "exclude": [
    "wwwroot"
  ],
  "packExclude": [
    "**.kproj",
    "**.user",
    "**.vspscc"
  ],
  "dependencies": {
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta1",
    "Microsoft.AspNet.Mvc": "6.0.0-beta1"
  },
  "frameworks": {
    "aspnet50": { },
    "aspnetcore50": { }
  }
}

The project.json file is used by the NuGet package manager to determine the packages required by the project. We’ve indicated that we need the MVC 6 (beta1) package.

We also need to modify the Startup.cs file in the root of our MovieAngularJS project. Change the contents of the Startup.cs file so it looks like this:

using System;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.Framework.DependencyInjection;

namespace MovieAngularJSApp {
    public class Startup {
        public void ConfigureServices(IServiceCollection services) {
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app) {
            app.UseMvc();
        }
    }
}

The ConfigureServices() method is used to register MVC with the built-in Dependency Injection framework included in ASP.NET 5. The Configure() method is used to register MVC with OWIN.

Creating the Movie Model

Let’s create a Movie model class that we can use to pass movies from the server to the browser (from the Web API to AngularJS). Create a Models folder in the root of the MovieAngularJS project:

Notice that you create the Models folder outside of the wwwroot folder. Source code does not belong in wwwroot.
Add the following C# class named Movie.cs to the Models folder:

using System;

namespace MovieAngularJSApp.Models {
    public class Movie {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Director { get; set; }
    }
}

Creating the Web API Controller

Unlike earlier versions of ASP.NET, the same controller base class is used for both MVC controllers and Web API controllers. Because we’ve pulled in the NuGet package for MVC 6, we are now ready to start creating Web API controllers.

Add an API folder to the root of your MovieAngularJS project:

Notice that you don’t add the API folder inside of your wwwroot folder. All of your source code – including source code for your controllers – should be located outside of the wwwroot folder.

Next, add a Web API controller by right-clicking the API folder and selecting Add, New Item. Choose the Web API Controller Class template and name the new controller MoviesController.cs:

Enter the following code for the Web API controller:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNet.Mvc;
using MovieAngularJSApp.Models;

namespace MovieAngularJSApp.API.Controllers {
    [Route("api/[controller]")]
    public class MoviesController : Controller {
        // GET: api/values
        [HttpGet]
        public IEnumerable<Movie> Get() {
            return new List<Movie> {
                new Movie {Id=1, Title="Star Wars", Director="Lucas"},
                new Movie {Id=2, Title="King Kong", Director="Jackson"},
                new Movie {Id=3, Title="Memento", Director="Nolan"}
            };
        }
    }
}

In the code above, the Get() action returns a list of movies. You can test whether the action is working by starting your app and navigating to /api/movies in your browser. In Google Chrome, you’ll get an XML representation of the movies:

Creating the AngularJS App

We are going to display the list of movies using an AngularJS template. First, we need to create our AngularJS app. Right-click on the Scripts folder and select Add, New Item.
Select the AngularJS Module template and click the Add button. Enter the following code for the new AngularJS module:

(function () {
    'use strict';

    angular.module('moviesApp', [
        'moviesServices'
    ]);
})();

The code above defines a new AngularJS module named moviesApp. The moviesApp has a dependency on another AngularJS module named moviesServices. We create the moviesServices below.

Creating the AngularJS Controller

Our next step is to create a client-side AngularJS controller. Create a new Controllers folder under the Scripts folder:

Right-click the Controllers folder and select Add, New Item. Add a new AngularJS Controller using $scope named moviesController.js to the Controllers folder. Enter the following content for the moviesController.js file:

(function () {
    'use strict';

    angular
        .module('moviesApp')
        .controller('moviesController', moviesController);

    moviesController.$inject = ['$scope', 'Movies'];

    function moviesController($scope, Movies) {
        $scope.movies = Movies.query();
    }
})();

The AngularJS controller above depends on a Movies service that supplies the list of movies. The Movies service is passed to the controller using dependency injection: it arrives as the second parameter of the moviesController() function. The moviesController.$inject assignment is required to enable the moviesController to work with minification. AngularJS dependency injection works off the names of parameters. In the previous blog post, I set up Grunt and UglifyJS to minify all of the JavaScript files. As part of the process of minification, all function parameter names are shortened (mangled). The $inject annotation enables dependency injection to work even when the parameter names are mangled.

Creating the AngularJS Movies Service

We’ll use an AngularJS Movies service to interact with the Web API. Add a new Services folder to the existing Scripts folder. Next, right-click the Services folder and select Add, New Item.
Add a new AngularJS Factory named moviesService.js to the Services folder. Enter the following code for moviesService.js:

(function () {
    'use strict';

    var moviesServices = angular.module('moviesServices', ['ngResource']);

    moviesServices.factory('Movies', ['$resource',
        function ($resource) {
            return $resource('/api/movies/', {}, {
                query: { method: 'GET', params: {}, isArray: true }
            });
        }]);
})();

The moviesServices module depends on the $resource object. The $resource object performs Ajax requests using a RESTful pattern. In the code above, the moviesServices is associated with the /api/movies/ route on the server. In other words, when you perform a query against the moviesServices in your client code then the Web API MoviesController is invoked to return a list of movies.

Creating the AngularJS Template

The final step is to create the AngularJS template that displays the list of movies. Right-click the wwwroot folder and add a new HTML page named index.html. Modify the contents of index.html so it looks like this:

<!DOCTYPE html>
<html ng-app="moviesApp">
<head>
    <meta charset="utf-8" />
    <title>Movies</title>
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.8/angular.min.js"></script>
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.8/angular-resource.js"></script>
    <script src="app.js"></script>
</head>
<body ng-cloak>
    <div ng-controller="moviesController">
        <table>
            <thead>
                <tr>
                    <th>Title</th>
                    <th>Director</th>
                </tr>
            </thead>
            <tbody>
                <tr ng-repeat="movie in movies">
                    <td>{{movie.Title}}</td>
                    <td>{{movie.Director}}</td>
                </tr>
            </tbody>
        </table>
    </div>
</body>
</html>

There are several things that you should notice about this HTML file: Notice that the <html> element includes an ng-app directive. This directive associates the moviesApp with the HTML file. Notice that <script> tags are used to add the angular and angular-resource JavaScript libraries from the Google CDN. Notice that the <div> element includes an ng-controller directive. This directive associates the moviesController with the contents of the <div> element.
Notice that the movies are displayed by using an ng-repeat directive. The title and the director are displayed for each movie. Notice that the <body> element includes an ng-cloak directive. The ng-cloak directive hides an AngularJS template until the data has been loaded and the template has been rendered. When you open the index.html page in your browser, you can see the list of movies displayed in the HTML table.

Summary

In this blog post, we created an AngularJS app that calls a Web API action to retrieve a list of movies. We displayed the list of movies in an HTML table by taking advantage of an AngularJS template. In the next blog post, I discuss how you can take advantage of AngularJS routing to divide a client-side AngularJS app into multiple virtual pages.

Hi Stephen, First of all thanks for making this great series. I have found it very useful to learn about project.json and the startup file, angular $inject and more. The Net is still lacking articles that explain these concepts, so it's very good to see that authors like you have begun to make articles dealing with vNext. I hope you will continue to write more articles dealing with vNext; maybe you could just continue to add new features to this movie application. Thanks again

Mohammad

What is the recommended way to switch between including the debug / minified versions of your JavaScript in your html document?
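As an aside, the GET /api/movies contract the post builds — a JSON-serializable array of objects with Id, Title and Director — can be exercised without the ASP.NET stack at all. Below is a minimal Python stdlib sketch of a server honoring that shape (the data and route mirror MoviesController above; the handler and helper names are illustrative, not part of the blog's project):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in data mirroring what the MoviesController.Get() action returns.
MOVIES = [
    {"Id": 1, "Title": "Star Wars", "Director": "Lucas"},
    {"Id": 2, "Title": "King Kong", "Director": "Jackson"},
    {"Id": 3, "Title": "Memento", "Director": "Nolan"},
]

class MoviesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/movies":
            body = json.dumps(MOVIES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo output quiet

def fetch_movies():
    # Bind an ephemeral port, serve exactly one request, then shut down.
    server = HTTPServer(("127.0.0.1", 0), MoviesHandler)
    port = server.server_address[1]
    t = threading.Thread(target=server.handle_request)
    t.start()
    with urlopen("http://127.0.0.1:%d/api/movies" % port) as resp:
        data = json.load(resp)
    t.join()
    server.server_close()
    return data

print([m["Title"] for m in fetch_movies()])  # ['Star Wars', 'King Kong', 'Memento']
```

A client such as the ngResource-based Movies service only cares that the route returns an array in this shape, which is what makes the server side swappable.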
https://stephenwalther.com/archive/2015/01/13/asp-net-5-and-angularjs-part-2-using-the-mvc-6-web-api
System.Taffybar

Description

The main module of Taffybar

Synopsis
- data TaffybarConfig = TaffybarConfig {
    - screenNumber :: Int
    - monitorNumber :: Int
    - barHeight :: Int
    - errorMsg :: Maybe String
    - startWidgets :: [IO Widget]
    - endWidgets :: [IO Widget]
  }
- defaultTaffybar :: TaffybarConfig -> IO ()
- defaultTaffybarConfig :: TaffybarConfig

Detail

This is a system status bar meant for use with a window manager like XMonad. It is similar to xmobar, but with more visual flare and a different widget set. Contributed widgets are more than welcome. The bar is drawn using gtk and cairo. It is actually the simplest possible thing that could plausibly work: you give Taffybar a list of GTK widgets and it will render them in a horizontal bar for you (taking care of ugly details like reserving strut space so that window managers don't put windows over it).

This is the real main module. The default bar should be customized to taste in the config file (~/.config/taffybar/taffybar.hs). Typically, this means adding widgets to the default config. A default configuration file is included in the distribution, but the essentials are covered here.

Config File

The config file is just a Haskell source file that is compiled at startup (if it has changed) to produce a custom executable with the desired set of widgets. You will want to import this module along with the modules of any widgets you want to add to the bar. Note, you can define any widgets that you want in your config file or other libraries. Taffybar only cares that you give it some GTK widgets to display.
Below is a fairly typical example:

import System.Taffybar
import System.Taffybar.Systray
import System.Taffybar.XMonadLog
import System.Taffybar.SimpleClock
import System.Taffybar.Widgets.PollingGraph
import System.Information.CPU

cpuCallback = do
  (_, systemLoad, totalLoad) <- cpuLoad
  return [ totalLoad, systemLoad ]

main = do
  let cpuCfg = defaultGraphConfig { graphDataColors = [ (0, 1, 0, 1), (1, 0, 1, 0.5) ]
                                  , graphLabel = Just "cpu"
                                  }
      clock = textClockNew Nothing "<span fgcolor='orange'>%a %b %_d %H:%M</span>" 1
      log = xmonadLogNew
      tray = systrayNew
      cpu = pollingGraphNew cpuCfg 0.5 cpuCallback
  defaultTaffybar defaultTaffybarConfig { startWidgets = [ log ]
                                        , endWidgets = [ tray, clock, cpu ]
                                        }

This configuration creates a bar with four widgets. On the left is the XMonad log. The rightmost widget is the system tray, with a clock and then a CPU graph. The clock is formatted using standard strftime-style format strings (see the clock module). Note that the clock is colored using Pango markup (again, see the clock module). The CPU widget plots two graphs on the same widget: total CPU use in green and then system CPU use in a kind of semi-transparent purple on top of the green.

It is important to note that the widget lists are *not* [Widget]. They are actually [IO Widget] since the bar needs to construct them after performing some GTK initialization.

XMonad Integration (via DBus)

The XMonadLog widget differs from its counterpart in xmobar: it listens for updates over DBus instead of reading from stdin. This makes it easy to restart Taffybar independently of XMonad. XMonad does not come with a DBus logger, so here is an example of how to make it work. Note: this requires the dbus-core (>0.9) package, which is installed as a dependency of Taffybar.
import XMonad.Hooks.DynamicLog
import DBus.Client.Simple
import System.Taffybar.XMonadLog ( dbusLog )

main = do
  client <- connectSession
  let pp = defaultPP
  xmonad defaultConfig { logHook = dbusLog client pp }

The complexity is handled in the System.Taffybar.XMonadLog module.

A note about DBus:
- If you start xmonad using a graphical login manager like gdm or kdm, DBus should be started automatically for you.
- If you start xmonad with a different graphical login manager that does not start DBus for you automatically, put the line eval `dbus-launch --auto-syntax` into your ~/.xsession *before* xmonad and taffybar are started. This command sets some environment variables that the two must agree on.
- If you start xmonad via startx or a similar command, add the above command to ~/.xinitrc

Colors

While taffybar is based on GTK+, it ignores your GTK+ theme. The default theme that it uses is in ~/.cabal/share/taffybar-<version>/taffybar.rc. You can customize this theme by copying it to ~/.config/taffybar/taffybar.rc. For an idea of the customizations you can make, see.

data TaffybarConfig

Constructors

defaultTaffybar :: TaffybarConfig -> IO ()

The entry point of the application. Feed it a custom config.

defaultTaffybarConfig :: TaffybarConfig

The default configuration gives an empty bar 25 pixels high on monitor 0.
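The point made earlier — that startWidgets and endWidgets hold widget *constructors* ([IO Widget]) rather than finished widgets, so the bar can run them only after its own GTK initialization — can be sketched outside Haskell. The Python toy below is an illustrative analogue, not Taffybar code; all names are made up for the sketch:

```python
# Toy analogue of TaffybarConfig: the config stores zero-argument widget
# constructors, and the "bar" calls them itself after it has initialized.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class BarConfig:
    bar_height: int = 25      # mirrors the documented 25-pixel default
    monitor_number: int = 0   # mirrors the documented monitor 0 default
    start_widgets: List[Callable[[], str]] = field(default_factory=list)
    end_widgets: List[Callable[[], str]] = field(default_factory=list)

def run_bar(cfg: BarConfig) -> List[str]:
    # In the real bar, GTK initialization would happen here, *before*
    # any widget constructor runs -- that is why the lists hold
    # constructors and not ready-made widgets.
    built = [make() for make in cfg.start_widgets]
    built += [make() for make in cfg.end_widgets]
    return built

clock = lambda: "clock"  # stand-ins for textClockNew, systrayNew, ...
tray = lambda: "tray"
cfg = BarConfig(start_widgets=[clock], end_widgets=[tray])
print(run_bar(cfg))  # ['clock', 'tray']
```

Deferring construction this way is the same design choice the [IO Widget] type encodes: the caller describes widgets, and the framework decides when to build them.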
http://pages.cs.wisc.edu/~travitch/taffybar/System-Taffybar.html
Created on 2010-12-28 06:57 by cooyeah, last changed 2013-06-28 15:46 by sylvain.corlay. This issue is now closed.

The constructor of the TextTestRunner class looks like:

def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1)

Since the default parameter is evaluated only once, if sys.stderr is redirected later, the test would still use the old stderr from when it was first initialized.

TextTestRunner is initialised with a stream to output messages on that defaults to sys.stderr. The correct way to redirect messages is to construct it with a different stream. If you want a redirectable stream then construct the runner with a stream that delegates operations to a 'backend stream' but allows you to redirect it. Fixing TextTestRunner to dynamically look up sys.stderr would not be compatible with systems that redirect sys.stderr but *don't* expect this to prevent test run information from being output. I suggest closing as wontfix.

Actually I can't see a good reason why not to just look up the *current* sys.stderr at instantiation time instead of binding it at import time as is the current behaviour. A patch with tests will make it more likely that this change goes in sooner rather than later.

All patches change the default value of stream to None in the constructor, and set it to the current sys.stderr if the argument is None. Unit tests are included to check this behavior. Also, the patch against Python 3.1 adds the Test_TextTestRunner test case to the list of tests to be run. Apparently this test case was not being run.

Committed to py3k in revision 87582.

Since the current behavior matches the current doc, "class unittest.TextTestRunner(stream=sys.stderr, descriptions=True, verbosity=1, runnerclass=None, warnings=None) A basic test runner implementation which prints results on standard error. ..." this is a feature change request, not a bug report. Hence, the change should not be backported, lest it surprise someone.
One could even question whether the change should be introduced now, but it does seem rather minor. That aside, the doc needs to be changed and a version-changed note added. Something like "class unittest.TextTestRunner(stream=None, descriptions=True, verbosity=1, runnerclass=None, warnings=None) A basic test runner implementation. If *stream* is the default None, results go to standard error. ... Version changed 3.2: default stream determined when the class is instantiated rather than when imported."

Thanks Terry. Done. Doc changes committed revision 87679.

Hello, It would be great if this modification was also done for Python 2.7. A reason is that IPython redirects stderr. When running unit tests in the IPython console without specifying the stream argument, the errors are printed in the shell. See
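The pattern the committed patch adopted — default the parameter to None and resolve sys.stderr at call time — is easy to demonstrate in isolation. The two factory functions below are illustrative stand-ins, not the actual unittest code:

```python
# stream=sys.stderr is evaluated once, when the def statement runs (i.e.
# at import time for a module). stream=None defers the lookup to each
# call, so later redirection of sys.stderr is respected -- the fix
# committed in revision 87582.
import io
import sys

def make_runner_eager(stream=sys.stderr):   # default bound at def time
    return stream

def make_runner_lazy(stream=None):          # resolved per call
    if stream is None:
        stream = sys.stderr
    return stream

original = sys.stderr
redirected = io.StringIO()
sys.stderr = redirected
try:
    eager = make_runner_eager()  # still the stderr captured at def time
    lazy = make_runner_lazy()    # picks up the redirected stream
finally:
    sys.stderr = original

print(eager is original)    # True
print(lazy is redirected)   # True
```

Passing an explicit stream still overrides both, so existing callers are unaffected — which is why the change could land without breaking the documented behavior for anyone constructing the runner with their own stream.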
http://bugs.python.org/issue10786
Subresultant algorithm taking a lot of time for higher degree univariate polynomials with coefficients from fraction fields

I have to compute the gcd of univariate polynomials over the fraction field of $\mathbb{Z}[x,y]$. I wanted to use the subresultant algorithm already implemented for UFDs. I copied the same function to fraction_field.py. The subresultant algorithm calls the pseudo-division algorithm, which has the following step:

R = d*R - c*B.shift(diffdeg)

This hangs when we consider random polynomials of degree >6 in $Frac(\mathbb{Z}[x,y])[z]$. (Note: the current version of Sage uses the regular Euclidean algorithm implemented in rings.py for computing gcd in this case. It is much slower than the subresultant algorithm (it hangs for degrees >4), which is why I thought the subresultant algorithm would improve things.)

Sample input:

sage: A.<x,y> = ZZ[]
sage: B = Frac(A)
sage: C.<z> = B[]
sage: p = C.random_element(6)
sage: q = C.random_element(6)
sage: gcd(p,q)

The following function is what I copied into fraction_field.py from unique_factorisation_domain.py.

def _gcd_univariate_polynomial(self, f, g):
    if f.degree() < g.degree():
        A, B = g, f
    else:
        A, B = f, g
    if B.is_zero():
        return A
    a = b = self.zero()
    for c in A.coefficients():
        a = a.gcd(c)
        if a.is_one():
            break
    for c in B.coefficients():
        b = b.gcd(c)
        if b.is_one():
            break
    d = a.gcd(b)  # d=1
    A = A // a
    B = B // b
    g = h = 1
    delta = A.degree() - B.degree()
    _, R = A.pseudo_quo_rem(B)
    while R.degree() > 0:
        A = B
        B = R // (g*h**delta)
        g = A.leading_coefficient()
        h = h*g**delta // h**delta
        delta = A.degree() - B.degree()
        _, R = A.pseudo_quo_rem(B)
        # print("i am here")
    if R.is_zero():
        b = self.zero()
        for c in B.coefficients():
            b = b.gcd(c)
            if b.is_one():
                break
        return d*B // b
    return d

This calls the following pseudo quotient/remainder function in polynomial_element.pyx. It is in this function that I was able to see it hangs at R = d*R - c*B.shift(diffdeg).
def pseudo_quo_rem(self, other):
    if other.is_zero():
        raise ZeroDivisionError("Pseudo-division by zero is not possible")
    # if other is a constant, then R = 0 and Q = self * other^(deg(self))
    if other in self.parent().base_ring():
        return (self * other**(self.degree()), self._parent.zero())
    R = self
    B = other
    Q = self._parent.zero()
    e = self.degree() - other.degree() + 1
    d = B.leading_coefficient()
    while not R.degree() < B.degree():
        c = R.leading_coefficient()
        diffdeg = R.degree() - B.degree()
        Q = d*Q + self.parent()(c).shift(diffdeg)
        R = d*R - c*B.shift(diffdeg)
        e -= 1
    q = d**e
    return (q*Q, q*R)

It is really hard to reconstruct the error. (Without also copying the obvious same function to fraction_field, and guessing the code that builds the fraction fields, and knowing the meaning of d, R, cB, shift, diffdeg after an fgrep of potential modules, and searching / finding an explicit example that reproduces the same problem.) It would be simpler for a potential helper if the question already isolates a minimal example with simple polynomials also leading to the same error, and the code would be welcome...

yes I understand that. I will edit the question with more details.

I am the author of these lines. I don't have time to have a careful look right now, but I'll do that next week.

thank you! that will be great!
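For readers without Sage at hand, the pseudo-division loop quoted above can be sketched over Z[x] with plain coefficient lists (lowest degree first). This is an illustrative pure-Python model, not Sage's implementation; the helper names are made up. The check at the end verifies the defining identity lc(B)^(deg A - deg B + 1) * A = Q*B + R:

```python
# Pure-Python sketch of pseudo-division over Z[x]; polynomials are lists
# of integer coefficients, lowest degree first.

def degree(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d  # -1 for the zero polynomial

def add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def scale(p, c, shift=0):
    # c * x^shift * p
    return [0] * shift + [c * a for a in p]

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pseudo_quo_rem(A, B):
    # Mirrors the quoted loop: Q = d*Q + c*x^diff, R = d*R - c*B*x^diff.
    R, Q = list(A), [0]
    e = degree(A) - degree(B) + 1
    d = B[degree(B)]
    while degree(R) >= degree(B):
        c = R[degree(R)]
        diff = degree(R) - degree(B)
        Q = add(scale(Q, d), scale([1], c, diff))
        R = add(scale(R, d), scale(B, -c, diff))
        e -= 1
    q = d ** e
    return scale(Q, q), scale(R, q)

A = [1, 0, 1]  # x^2 + 1
B = [1, 2]     # 2x + 1
Q, R = pseudo_quo_rem(A, B)
# Verify lc(B)^(degA-degB+1) * A == Q*B + R, i.e. 4*(x^2+1) == Q*B + R.
lhs = scale(A, 2 ** 2)
rhs = add(mul(Q, B), R)
print(degree(add(lhs, scale(rhs, -1))) == -1)  # True: the identity holds
```

Over a fraction field the same loop is exact division-free, but each step multiplies the coefficients (here `d*R - c*B*x^diff`), which is exactly where uncancelled coefficient growth can make high-degree inputs blow up.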
https://ask.sagemath.org/question/37672/subresultant-algorithm-taking-a-lot-of-time-for-higher-degree-univariate-polynomials-with-coefficients-from-fraction-fields/
For design reasons the OpenGL specification was isolated from any window system dependencies. The resulting interface is a portable, streamlined and efficient 2D and 3D rendering library. It is up to the native window system to open and render windows. The OpenGL library communicates with the native system through additional auxiliary libraries. For example, the GLX auxiliary library describes the interaction between OpenGL and the X Window System.

The OpenGL Utility Toolkit (GLUT) is a programming interface with ANSI C and FORTRAN bindings for writing window system independent OpenGL programs. It was written by Mark J. Kilgard and fills a great hole left by the OpenGL specification. Thanks to the GLUT developers we can use a common window system interface independently of the target platform. OpenGL applications using GLUT can be easily ported between platforms without having to introduce numerous changes to the source code. GLUT definitely simplifies the production of OpenGL code and it complements the OpenGL library. GLUT is relatively small and easy to learn. It is well designed and, in fact, its author has already written wonderful documentation for it. Therefore starting a series of articles here in LinuxFocus seems redundant. We encourage any serious developer to read Mark's documentation. Our purpose for writing this regular GLUT column is to introduce the GLUT library and its usage step by step with examples, as a companion reading with the OpenGL series of this magazine. We hope this will make a useful contribution and motivate more programmers to join the OpenGL-Linux wagon. In any case, get your own copy of Mark's documentation as a good reference.

The GLUT API is a state machine like OpenGL. This means that GLUT has a number of state variables that live during the execution of the application. The initial states of the GLUT machine have been reasonably chosen to fit most applications. The program can modify the values of the state variables as it sees fit.
Whenever a GLUT function is invoked its action is modified according to the values of the state variables. GLUT functions are simple; they take few parameters. No pointers are returned and the only pointers passed to GLUT functions are pointers to character strings and opaque font handles. GLUT functions can be classified into several sub-APIs according to their functionality:

Every OpenGL program using GLUT must begin by initializing the GLUT state machine. The GLUT initialization functions are prefixed by glutInit-. The main initialization routine is glutInit:

Usage:
glutInit(int *argcp, char **argv);

argcp is a pointer to the program's unmodified argc variable from main. Upon return, the value pointed to by argcp is updated, because glutInit extracts any command line options relevant for the GLUT library, for example: under the X Window System environment, any options relevant for the X window associated with the GLUT window. argv is the program's unmodified argv variable from main. glutInit takes care of initializing the GLUT state variables and negotiating a session with the window system.

There are a few routines that can appear before glutInit; only routines prefixed by glutInit-. These routines can be used to set the default window initialization state. For example:

Usage:
glutInitWindowPosition(int x, int y);
glutInitWindowSize(int width, int height);

x,y = screen position in pixels of the window (upper left corner)
width,height = size in pixels of the window.

There is another initialization routine omni-present in every OpenGL application, glutInitDisplayMode():

Usage:
glutInitDisplayMode(unsigned int mode);

mode is the display mode, a bitwise OR-ing of GLUT display mode bit masks.
The possible bitmask values include GLUT_RGBA, GLUT_RGB, GLUT_INDEX, GLUT_SINGLE, GLUT_DOUBLE, GLUT_DEPTH, GLUT_STENCIL, GLUT_ACCUM, GLUT_ALPHA, GLUT_MULTISAMPLE and GLUT_STEREO. First, an example of a simple program:

#include <GL/glut.h>

int main(int argc, char **argv){
  /* Set window size and location */
  glutInitWindowSize(640, 480);
  glutInitWindowPosition(0, 0);
  /* Select type of Display mode: Single buffer & RGBA color */
  glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);
  /* Initialize GLUT state */
  glutInit(&argc, argv);
  .....more code
}

Second, an example of an animation program:

/* Select type of Display mode: Double buffer & RGBA color */
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);

We will come back to these two examples as we continue to learn more about GLUT. The main difference is that in the second case the display is initialized in double buffer mode, ideal for animations because it eliminates flickering effects while changing frames in the animation sequence.

As mentioned before, GLUT is a state machine. Now we will learn it is also designed as an event driven engine. This means that there is a "timer" or continuous loop that gets started after the proper initializations and that processes, one by one, all the events declared to GLUT during initialization. Events are: a mouse being clicked, a window closed, a window reshape, a cursor moved, keyboard keys pressed, and even more curiously the "idle" event, i.e. nothing happens!

Each one of the possible events must be registered in one of the GLUT state variables for the "timer" or event processing loop of GLUT to periodically check whether that event has been triggered by the user. For example, we could register "click mouse button" as an event for GLUT to watch out for. Events are registered through callback registration routines. All have the syntax glut[someEvent]Func; in the case of mouse clicking it would be glutMouseFunc. A callback registration tells the GLUT engine which user-defined function is to be called if the corresponding event is triggered. So, if I write my own routine MyMouse which specifies what to do if the left mouse button is clicked (or the right, etc.),
then I can register my callback function after the glutInit() in main() using the statement "glutMouseFunc(MyMouse);". Let us leave for later which callback functions and events are permitted in GLUT. The important thing now is that after all the important events in our application have been registered, we must invoke the event processing routine of GLUT, namely glutMainLoop(). This function never returns; our program basically enters an infinite loop. It will call as necessary any callbacks that have been previously registered. Every main() for an OpenGL application must then end in a glutMainLoop() statement. So in the case of our animation template:

    int main(int argcp, char **argv)
    {
      /* Initialize GLUT state */
      glutInit(&argcp, argv);
      glutInitWindowSize(640, 480);
      glutInitWindowPosition(0, 0);

      /* Open a window */
      glutCreateWindow("My OpenGL Application");

      /* Select type of display mode: double buffer & RGBA color */
      glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);

      /* Register callback functions */
      /* ... */

      /* Start event processing engine */
      glutMainLoop();
    }

Notice I have added some extra code we never mentioned before. It is one of GLUT's window management routines, glutCreateWindow(char *name). This is what I like so much about the OpenGL & GLUT design philosophy: it is pretty clear what the routine does just by looking at the name! It also takes care of actually passing the order to the underlying window system to open a window for our OpenGL application. The window will have the name "name" passed as a character string. In the X Window environment this name is written on the upper left corner of the window. The window management section of GLUT has many other functions that we will eventually have a look at. For now, this one is sufficient. I have also rearranged the initialization routines to show that they can be placed after glutInit().

Back to events... I now want to introduce two callback registration functions that are very fundamental in any animation program.
The glutDisplayFunc, which sets the display function for the current window, and the glutIdleFunc, which sets the idle callback. Both registration routines expect a pointer to a function of type void (void). Say we write two additional callback functions for our animation template: void MyDisplay(void), which takes care of invoking the OpenGL instructions that actually draw our scene onto the window, and void MyIdle(void), which is a function that gets called whenever there is no other user input. That is, each time the event processing machine of GLUT goes once around the infinite loop (glutMainLoop()) and does not find any new event triggered, it processes MyIdle.

Why do I need to register an idle callback function in an animation program? Because if we wish to modify each one of the images (frames) shown during the animation independently of any user input, there has to be a function (the idle callback function) that gets called every so often during the life of the OpenGL program and changes the frames before they get drawn by MyDisplay().

Finally, here is a simple template for an animation program:

    #include <GL/glut.h>

    void MyIdle(void)
    {
      /* Some code to modify the variables defining the next frame */
      /* ... */
    }

    void MyDisplay(void)
    {
      /* Some OpenGL code that draws a frame */
      /* ... */

      /* After drawing the frame we swap the buffers */
      glutSwapBuffers();
    }

    ...

      /* Select type of display mode: double buffer & RGBA color */
      glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);

      /* Register callback functions */
      glutDisplayFunc(MyDisplay);
      glutIdleFunc(MyIdle);

    ...

Notice that at the end of MyDisplay I have added a new GLUT routine, glutSwapBuffers(). This is very useful in animations. We are using a window in DOUBLE buffer mode: one buffer shown and one hidden. The drawing OpenGL instructions in this case always render into the hidden buffer. The glutSwapBuffers call exchanges the buffers, showing in the window at once what was drawn.
This technique is common in computer animations because it prevents the human eye from seeing the frame being constructed line by line. There is already enough material to start writing OpenGL applications. The only things missing are the OpenGL instructions in MyDisplay that actually do the drawing... but that is another story ;-).

In the next article on GLUT programming we will explore in more depth the functionality available to us in the window management section of GLUT, and how to open multiple scenes inside the same window. We will also learn about using menus, including the pros and cons of their portability.
http://www.linuxfocus.org/Portugues/January1998/article16.html
On Sat, 2 May 2009, twisted-web-request at twistedmatrix.com wrote:

> Hi Dave! Thanks for taking the time to contribute.
>
> Unfortunately, this contribution will almost certainly be lost if you
> don't open a ticket for it at <>. Once you've done that, you'll need to
> mark it for review by adding the "review" keyword and reassigning it to
> nobody - the empty entry at the top of the "reassign" menu.

Thanks for your thoughtful comments! I created a ticket with a patch and unit test and tried to respond to all of the things you mentioned. The ticket is here:

>> I had to add to the XMLRPCIntrospection class in order to include my
>> method in the system namespace, even though it is not really an
>> introspection method. It's also possible to subclass XMLRPCIntrospection
>> from outside of xmlrpc.py and use putSubHandler instead of
>> xmlrpc.addIntrospection to create the "system" namespace.
>
> I'm not quite sure what you're getting at here - do you think one of
> these other approaches would be better?

Well, I'm sort of thinking out loud here, because I'm not sure. As designed, the way to get a "system" namespace in your XML-RPC server is to call xmlrpc.addIntrospection(server). However, the "system.multicall" method belongs to the "system" namespace and is not an introspection method. If I were to add another method called xmlrpc.addMulticall(), the two methods would both be responsible for creating the system namespace. Would the multicall method go in its own class? Or would it be monkey-patched onto the XMLRPCIntrospection class? The latter would avoid breaking existing code, but it would also create an order-of-operations requirement (first introspection, then multicall), which would be inelegant. For comparison, this is how SimpleXMLRPCServer does it:

> ... join us in the #twisted IRC channel on chat.freenode.net.

I'll be around as "ramenboy" - maybe I'll see you around.

Thanks,
Dave

--
.. Dave Benjamin - Software Developer - Phoenix, Arizona ..
http://twistedmatrix.com/pipermail/twisted-web/2009-May/004171.html
Can u give any example where it may fail? What I thought: if S = p2 + p3, and if there is any way to split S into p1 + p4, then it will validate the conditions.

It is correct. You are still doing the same thing but in a different way.

I am trying to solve this question using the KMP algorithm but I am getting a runtime error. Can anyone help me locate where I am making a mistake? Here's my code: thanks in advance!!

I have used hashing here. When I submit my solution it is giving me a wrong answer.

Thanks @rishup_nitdgp for such a good question and also @akashbhalotia for such a great editorial!! My unnecessary story for the question: I finally solved this problem and I believe that I have got a basic understanding of string hashing. I still have a few doubts regarding this topic. Firstly, I was checking for collisions in my code, so I got TLE, and my hope of solving this question got smashed. So I started looking at other people's code, and I realised that they weren't checking for collisions, so I commented that part out in my code and it got accepted. So basically I want to know whether or not we should check for collisions, and if yes, when should we, and how can one make collision checking more effective? Also, can anyone please answer this question as well? It's regarding the Chef Impress question in this month's cook-off. It would be a great help for beginners like me.

@rishup_nitdgp @akashbhalotia Hello sir, after reading your editorial I tried this question but got TLE. Can you please have a look at my solution and explain where I went wrong? Any suggestions are welcome. Kindly reply after seeing the comment.
It will help me to improve.

Refer to this link for understanding string hashing and how to decrease the probability of collisions.

I used hashing: it creates a hash function, and if the hash values of a pair are equal then it is a match. Can someone explain to me why I'm getting TLE? @rishup_nitdgp @everule1 @akashbhalotia Please have a look, sir.

    dd = {}
    modulo = 10**9 + 7
    for i in range(97, 123):
        dd[chr(i)] = i - 97 + 1

    def search(pattern, text):
        n = len(pattern)
        c = 0
        e = 0
        for j in range(n):
            a = dd[pattern[j]]
            b = dd[text[j]]
            c += (a * (10**(n - j - 1))) % modulo
            e += (b * (10**(n - j - 1))) % modulo
        return c == e

    for _ in range(int(input())):
        s = input()
        n = len(s)
        co = 0
        if n < 4:
            print("0")
        else:
            for i in range(2, n, 2):
                a = s[:i//2]
                b = s[i//2:i]
                c = s[i:(i+n)//2]
                d = s[(i+n)//2:]
                if search(a, b) and search(c, d):
                    co += 1
            print(co)

Thanks!! I got it now.

Getting TLE for the following code; used KMP string matching.

    #include <bits/stdc++.h>
    using namespace std;

    void computeprefix(int pref[], string P)
    {
        pref[0] = 0;
        int k = 0, m = P.length();
        for (int i = 1; i < m; i++) {
            // recursively get the longest proper prefix/suffix in the existing
            // longest proper substring which is already a prefix and suffix
            while (k > 0 && (P[k] != P[i]))
                k = pref[k];
            if (P[k] == P[i])
                k = k + 1;
            pref[i] = k;
        }
        // cout << "prefix function\n";
        // for (int i = 0; i < m; i++)
        //     cout << pref[i] << " ";
        // cout << endl;
    }

    int findoccurence(string T, string P, int pref[], int st)
    {
        int n, m;
        n = T.length();
        m = P.length();
        int q = 0;
        for (int i = st; i < n; i++) {
            while (q > 0 && T[i] != P[q])
                q = pref[q];
            if (T[i] == P[q])
                q++;
            if (q == m) {
                return i - (m - 1);
                // cout << "Pattern matches at: " << i-(m-1) << " " << endl;
                // q = pref[q-1];  // for further remaining text
            }
            // cout << i << endl;
        }
        return -1;
    }

    bool check(string T, string P, int pref[], int st, int n, int m)
    {
        if (st == n)
            return false;
        int l = n - st;
        if (l % 2 == 1)
            return false;
        int sz = (n - st) / 2;
        int ind1 = st, ind2 = st + sz;
        for (int i = 0; i < sz; i++) {
            if (T[ind1] != T[ind2])
                return false;
            ind1++;
            ind2++;
        }
        return true;
    }

    int main()
    {
        int t;
        cin >> t;
        while (t--) {
            string T, P;
            cin >> T;
            int n, m, cnt = 0;
            n = T.length();
            P = "";
            P = P + T[0];
            set< pair<int, int> > s;
            for (int i = 1; i <= n; ) {
                m = i;
                int pref[m];  // prefix array for our query
                computeprefix(pref, P);
                int val = findoccurence(T, P, pref, i);
                // cout << val << " " << i << " " << P << endl;
                if (val == i && check(T, P, pref, val + i, n, m))
                    cnt++;
                if (i == n)
                    break;
                P = P + T[i];
                i++;
            }
            cout << cnt << endl;
        }
        return 0;
    }

Can anyone tell what is wrong with my solution, please?

@abhishek_iita Can you please explain why we create the lps array of the reversed string?

Because of T2 (to simplify things): you can treat T2 as T1 when the string is reversed and apply the same rule as that of T1.

Why is my solution giving WA? (It passes the sample test cases, and many custom, well-formatted test cases as well.) I read about string hashing on cp-algorithms, according to which hash(i,j) = (hash(0,j) - hash(0,i-1)) * (multiplicative modular inverse of p^i), and I have used this in my solution. In case my WA solution is difficult to understand, I have commented my code here:

I have debugged it: here. Check out line 32 of your code and the AC one, and I think you shall find the mistake.

Oh! This silly mistake slipped real quick past my eyes. Thank you so much.

@abhishek_iiita I also tried to implement the solution using the KMP algorithm, but without any extra lps array and period. Can you explain why the period is calculated for the substring? I used an approach similar to yours for calculating the two lps arrays. While iterating over the original string I just checked the lps lengths of the two halves, and if they exceed the half lengths I increment the count. Still getting a WA. Here is the link to my solution - Thanks for any help.

Here is a case for which your code is failing:

    1
    abababaa

P.S. Your code is not covering the overlapping cases.
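For readers comparing the hashing attempts in this thread: the TLE pattern above recomputes 10**(n-j-1) for every character of every comparison. The usual fix is a precomputed polynomial prefix hash, which makes any substring comparison O(1). This is a sketch, not code from the thread; the base and modulus are arbitrary choices:

```python
# Precomputed polynomial prefix hash with O(1) substring comparison.
# B (base) and M (modulus) are assumptions, not values from the thread.
M = 10**9 + 7
B = 131

def build(s):
    n = len(s)
    h = [0] * (n + 1)   # h[i] = hash of the prefix s[:i]
    p = [1] * (n + 1)   # p[i] = B**i mod M, precomputed once
    for i, ch in enumerate(s):
        h[i + 1] = (h[i] * B + ord(ch)) % M
        p[i + 1] = p[i] * B % M
    return h, p

def sub_hash(h, p, i, j):
    """Hash of s[i:j], computed in O(1) from the prefix hashes."""
    return (h[j] - h[i] * p[j - i]) % M

s = "abcabc"
h, p = build(s)
assert sub_hash(h, p, 0, 3) == sub_hash(h, p, 3, 6)   # "abc" vs "abc": equal
assert sub_hash(h, p, 0, 2) != sub_hash(h, p, 1, 3)   # "ab" vs "bc": differ
```

Equal substrings always hash equally; unequal ones collide only with small probability, which is the trade-off the collision-checking discussion above is about.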
https://discuss.codechef.com/t/chefship-editorial/66255/119
Subject: Re: [boost] [tweener] Doxygen documentation issues
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2013-11-05 13:22:55

AMDG

On 11/05/2013 10:10 AM, Paul A. Bristow wrote:
>>> Doxygen XML is a bit annoying about that. Last time I tried, it seemed
>>> to ignore the option that's supposed to hide private members. What we
>>> usually use is /** INTERNAL ONLY */ which will cause BoostBook to
>>> strip it out.
>
> I've added this in all the namespace detail sections and this is
> effective at reducing the items that are in the C++ reference section
> (from Quickbook).
>
> This is what a typical *user* wants - the implementation stuff in
> namespace detail (for example) is just clutter.
>
> But an *author or maintainer* might want all the detail sections
> included in the C++ reference section.
>
> I can achieve this by horribly hacking the *.*pp files to change all
> INTERNAL ONLY to INTERNAL AS WELL - or something.
>
> Can you suggest a better way of switching this INTERNAL ONLY feature
> on/off?

These days, I use \cond show_private instead.

In Christ,
Steven Watanabe

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2013/11/208290.php
I have a dialog with a TreeView. In the TreeView I can detect when an object is selected or right-clicked.

    def ContextMenuCall(self, root, userdata, obj, lColumn, lCommand):
        if lCommand == ID_SHOWTHUMBNAILS:
            # Update dialog according to obj
            return True
        return False

However, in the main dialog I want to react to this right-clicking. How do I 'notify' the main dialog? I tried sending a message() or SendParentMessage(), but that did not work.

I found a reply here.
https://plugincafe.maxon.net/topic/12242/reacting-on-treeview-user-action
through static type checking
- Providing code completion and refactoring support in many of our favorite editors, like VS Code, WebStorm, and even Vim
- Allowing us to use modern language features of JavaScript before they are available in all browsers

We think TypeScript is great, and we think many of our Ionic Angular developers would agree. We have also found TypeScript to be beneficial outside of Angular as well. In fact, we use it in our own AppFlow dashboard, which is a large React app. Over the past couple of years, TypeScript has started to gain momentum in the React world and now has official support in create-react-app. So, we thought it would be helpful to share a little tutorial on how to kick off a new React project using TypeScript.

Using Create-React-App

Create-react-app is the CLI for React projects. While not as encompassing as the Angular CLI, it still does a great job at creating new apps (hence its name). Install the latest create-react-app through npm:

    npm install -g create-react-app

Now, start a new project by running create-react-app, specify your project name, and add the TypeScript flag:

    create-react-app reacttypescript --typescript
    cd reacttypescript

This will scaffold out a new React project using TypeScript as the language. Next, start up the development server:

    npm run start

And, just like that, your React project is running in TypeScript! Notice that you get a tsconfig.json file for your TypeScript configuration. Feel free to customize the settings in here to fit your needs.

React Components in TypeScript

Next, let's create a new component in TypeScript and consume it from our App to see some of the benefits we now receive.
Create a new file named SayHello.tsx and paste in the following code:

    import React from 'react';

    interface SayHelloProps {
      name: string;
      onGetNewName: () => void;
    }

    interface SayHelloState {
      count: number;
    }

    export default class SayHello extends React.Component<SayHelloProps, SayHelloState> {
      constructor(props: SayHelloProps) {
        super(props);
        this.state = { count: 0 };
      }

      render() {
        const { onGetNewName, name } = this.props;
        const { count } = this.state;
        return (
          <div>
            <p>Hello {name}!</p>{' '}
            <button onClick={() => onGetNewName()}>Enter new name</button>
            <p>You clicked {count} times</p>
            <button onClick={() => this.setState({ count: count + 1 })}>
              Click
            </button>
          </div>
        );
      }
    }

While this example is a bit contrived, it does highlight a major benefit of TypeScript: statically typing our components by specifying interfaces for the props and state objects. The SayHelloProps and SayHelloState interfaces specify the shape of the props and state objects, which are then passed in as generic arguments to the Component class. Now that our component is statically typed, it is no longer possible to accidentally mistype the name of one of our props or state members, nor can you try to assign it a value that does not match the member's type. What's great about TypeScript is that you can catch these errors right in your code editor and also during build time, as seen below.

Wrapping Up

TypeScript helps teams scale their JavaScript projects by providing modern language features, static type checking, and tooling. With the official support of TypeScript in create-react-app, I expect more React users will discover how great this language is and how it can help them with their development. You might be wondering why a post about React on the Ionic blog? Well, in case you haven't heard, Ionic 4.0 was built from the ground up to work with any framework, and we have some exciting updates about official support for React coming very soon.
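As an aside on the generics used in the component above: Component&lt;SayHelloProps, SayHelloState&gt; is ordinary TypeScript generics at work. Stripped of React entirely, the same shape-constraining idea looks like this (the class and interface names here are invented for the sketch, not part of React):

```typescript
interface Props { name: string; }
interface State { count: number; }

// A minimal stand-in for React.Component<P, S>: the two generic parameters
// fix the shapes of props and state for every instance of the class.
class ComponentLike<P, S> {
  constructor(public props: P, public state: S) {}
}

const c = new ComponentLike<Props, State>({ name: "Ada" }, { count: 0 });
console.log(c.props.name);   // typed as string
console.log(c.state.count);  // typed as number
// new ComponentLike<Props, State>({ name: 1 }, { count: 0 });  // compile error
```

The commented-out line is exactly the kind of mistake the compiler rejects before the code ever runs.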
Expect to see more React content coming from us in the future. In the meantime, check out a sample Ionic React app to see the alpha in action. Want to see more of the benefits of using TypeScript in React? Hit us up in the comments section below or on Twitter with your questions and feedback.
https://blog.ionicframework.com/how-to-use-typescript-in-react/
XForms is gaining momentum rapidly, with support available for common browsers using extensions or plugins, and through products like the IBM® Workplace Forms technology (see the Resources section at the end of this article to find out more). This article shows you how to use actions and events with XForms, and how to control the format of the form's output.

Part 1 covered various browsers and their plugins that are necessary for viewing and interacting with XForms documents, so that won't be covered again here. Part 2 showed you how to create an XForms-based form using any of the available controls, how to create a data model, and the different types of basic submission actions that are available. If you are following along, or you already have a plugin working with your favorite browser, you can jump in, download the code for this article, and view the example XForms.

XForms uses the XML Events standard to attach event handlers to specific elements in a document. XML Events is an XML representation of the DOM event model from HTML. Let's take a quick look at the event model, in case you haven't encountered it before.

How events are dispatched

Listing 1 shows you a fairly simple XHTML document. When this is loaded into your Web browser, a DOM tree like the one in Figure 1 is created to represent this document.

Listing 1. A simple XHTML document

If a user clicks on the emphasized "this" in the <p> element, an event will begin to travel along the red path in Figure 1; this represents the capture phase. If no event handlers processed the click, the event will travel back up the same path until it comes back to the root element, where the default click handler will ignore it.

Figure 1. An event traveling down the DOM tree

In HTML, you can attach a JavaScript action to the various events, such as onclick or mouseover, to process the event.
For example, if you change the <p> element from Listing 1 to the one in Listing 2, you can click on "this" and have a JavaScript™ alert box pop up for the user. The event is a click, which is handled by an observer (the <em> element) and processed by a handler (the snippet of JavaScript).

Listing 2. An XHTML event handler

This technique has been working for years; why isn't it good enough for XForms to use? The event name is hard-coded into the language (XHTML in this case); to add a new event, you need to add a new attribute. The event name is also very hardware-specific, such as "onclick" when an element is activated (it could be clicked with a mouse, given focus and activated by pressing Return on a keyboard, or triggered in any other way you can imagine). You can only use one scripting language to create a handler, since you can't have more than one instance of an attribute in the observer. And, finally, the event handling scripts are intertwined with the form's presentation markup.

XForms solves these problems by applying XML and leveraging the XML Events standard (see Resources). The event, observer, and handler are specified as part of an XForms control. For example, the XHTML in Listing 3 will pop up an alert when clicked.

Listing 3. XHTML event and handler embedded in a <button> element

Listing 4 shows you how to accomplish the same thing using XForms.

Listing 4. XForms event and handler

This creates an observer watching for the DOMActivate event (the same as the onclick event in XHTML) and attaches the <xf:message> action (which displays an alert message) to the trigger. When you click the trigger, it pops up an alert.

The XForms standard lists a large number of events, which can be aimed at a control (any of the <xf:group>, <xf:input>, <xf:output>, <xf:range>, <xf:submit>, <xf:secret>, <xf:select>, <xf:select1>, <xf:textarea>, <xf:trigger>, or <xf:upload> elements), the model, <xf:submission>, and the instance.
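The markup for Listing 4 itself did not survive extraction. A hedged reconstruction consistent with the description — the label text and the xf: and ev: namespace declarations are assumptions here, not the article's verbatim code — might look like:

```xml
<!-- Hedged reconstruction, not the article's original listing -->
<xf:trigger xmlns:xf="http://www.w3.org/2002/xforms"
            xmlns:ev="http://www.w3.org/2001/xml-events">
  <xf:label>Click me</xf:label>
  <!-- observer: the trigger; event: DOMActivate; handler: xf:message -->
  <xf:message ev:event="DOMActivate" level="modal">Hello from XForms!</xf:message>
</xf:trigger>
```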
Check the Resources section for sources of information about the various events and how to use them. As you saw in Listing 4, an XForms event handler (such as <xf:message>) is attached to the specified event. There are a number of XForms actions that can be used in an event handler:

- <xf:action> -- A container for other actions, which will be invoked in the order specified inside the <xf:action> element.
- <xf:dispatch> -- Dispatch an event (either one you've made up, or one of the predefined XForms events) to a given target element. You can specify whether or not the event bubbles up towards the root if the target element doesn't handle it.
- <xf:load> -- Load the specified URL, either in a new window or replacing the current document.
- <xf:message> -- Display a specified message (which can be in the instance, loaded from an external file, or encapsulated in the <xf:message> element) to the user.
- <xf:rebuild> -- Cause the XForms processor to rebuild any internal data structures used to track dependencies between instance data elements. Causes an xforms-rebuild event.
- <xf:recalculate> -- Recalculate and update instance data elements. Causes an xforms-recalculate event.
- <xf:refresh> -- Refresh the XForms user interface, updating the controls to reflect the current state of the instance. Causes an xforms-refresh event.
- <xf:reset> -- Reset the form by dispatching an xforms-reset event. You probably don't ever need to use this, since users never reset forms to their original state.
- <xf:revalidate> -- Revalidate the instance data as specified by the processing model. Causes an xforms-revalidate event.
- <xf:send> -- Dispatches an xforms-submit event, activating the form's submission processing.
- <xf:setfocus> -- Dispatch an xforms-focus event to the specified control. This can be used to implement accessibility features, etc.
- <xf:setvalue> -- Explicitly set the value of the specified data instance element.

Let's take the simple search form created in the first two parts of this series.
As you'll recall, it has a single text entry field for search keywords, and one button for submitting the search. In Listing 5, some DOMFocusIn event handlers (which have been highlighted) are added to provide "ephemeral" pop-ups for the controls.

Listing 5. Adding helpful messages with <xf:message>

These are often rendered as tool tips or similar mini-windows, as shown in Figure 2.

Figure 2. The <xf:message>s in action

Now that you have a good foundation for handling events with XForms, take a look next at some more advanced submission topics and options.

Submission formats and options

Part 2 of this series took a quick look at the various basic XForms submission methods, which are recapped here:

- <xf:submission method="form-data-post"> is the same as <form method="post" enctype="multipart/form-data"> in HTML, and can be handled at the receiving URL in the same way. The data instance is serialized as multipart form data.
- <xf:submission method="get"> is the same as <form method="get"> in HTML. The data instance is serialized with URL encoding and appended to the specified URL.
- <xf:submission method="post"> sends the form's current data model instance to the specified URL as an XML document. There is no equivalent in HTML.
- <xf:submission method="put"> writes the form's current data model instance to the specified URL as an XML file, assuming you have PUT permissions for that URL on the server. There is no equivalent in HTML.
- <xf:submission method="urlencoded-post"> is the same as <form method="post" enctype="application/x-www-form-urlencoded"> in HTML. The data instance is serialized with URL encoding.

The put and get methods support file: and http:/https: URLs, while the others support http:/https: and mailto: URLs.
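Declarations for two of the recapped methods might look like this (a hedged sketch; the id and action values are invented for illustration):

```xml
<!-- GET: instance serialized as URL parameters, like an HTML search form -->
<xf:submission id="search" method="get" action="http://example.com/search"/>

<!-- PUT: instance written out as an XML file -->
<xf:submission id="save" method="put" action="file:search.xml"/>
```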
The <xf:submission> element allows several optional attributes to help control the XML created out of the data model instance by the XForms processor:

- version -- Specify the version of XML to create (the default is "1.0", obviously).
- indent -- Specify whether the generated XML should include whitespace to improve readability.
- mediatype -- Specify the MIME type for the generated XML (you should make sure the media type is compatible with application/xml, the default).
- encoding -- Specify a text encoding for the generated XML.
- omit-xml-declaration -- Specify whether to leave out the XML declaration.
- standalone -- Specify whether to include a standalone declaration in the XML.
- cdata-section-elements -- A comma-separated list of elements that should be serialized as CDATA sections.
- includenamespaceprefixes -- A comma-separated list of namespaces; elements matching these namespaces are included in the output XML.

The separator attribute can also be used to specify the character used to separate key=value pairs in URL encoding (the default is ";").

Modifying the basic search form slightly to use a put to a file so you can see its XML output gives the results in Listing 6, which shows you what the default XML created by XForms from the data model instance looks like. It includes the XML namespace declarations from the XForms document, which aren't necessarily appropriate for the instance.

Listing 6. Sample XForms XML output

Adding includenamespaceprefixes="#default" to the <xf:submission> element provides you with the document in Listing 7, which only includes the default namespaces that you've assigned to the model. In this case, there are no namespaces, and you get a basic, unadorned XML document containing only the data from the data model instance.
The data lives and dies with the user's Web browser, while still doing something useful. For example, you could pretend that you're a high school computer science student (see Figure 3), and build a temperature converter. Figure 3. A high school computer science assignment: Make a temperature converter This is a handy tool (see Listing 8) if you've got Internet friends who complain about how hot or cold their area is, and you want to quickly figure out what the problem is with their temperature. Listing 8. Temperature converter, XForms style The XForms bits in Listing 8 have been highlighted. First you declare a really simple data model, one that knows about a single temperature in both the Celcius (Metric) and Fahrenheit (Imperial) scales. You only fill in one data value; after all, you want a computer to do this conversion for you instead of figuring it out. You've also created two input fields that let the user enter a temperature value in either scale. Note that you haven't used the <xf:bind> element to constrain the input data to numbers; that's one of the adavanced XForms topics not covered in this series. Finally, make one button for converting Celcius to Fahrenheit, and one button for going the other way. Inside the <xf:action> element, use <xf:setvalue> to calculate the other temperature without using any sort of scripting (that's just an XPath expression in the value attribute), and definitely without requiring any sort of server intervention. This temperature converter is what XForms is all about, providing a richer client experience without taxing the Web server. This article provided a quick overview of the various XForms actions and showed how you can create observers for XForms events inside of XForms controls. You also looked at the various kinds of submission techniques, and the options you can use to control how your data is sent to the server. 
This series of articles has provided you with the basics to get up and running to create your own XForms forms. You can now begin crafting your own XForms experiments!

Resources

- Visit the XForms home at W3C.
- XForms.org: The Nexus for Intelligent Web Apps contains a treasure trove of information and links on XForms.
- Read Part 2 in this series, Introduction to XForms.
- Read Part 1 in this series.
http://www.ibm.com/developerworks/xml/library/x-xformsintro3/
Before considering class and interface declarations in Java, it is essential that you understand the object-oriented model used by the language. No useful programs can be written in Java without using objects. Java deliberately omits certain C++ features that promote a less object-oriented style of programming. Thus, all executable code in a Java program must be part of an object (or a class, to be more precise). The two main characteristics of objects in Java are:

- An object is a collection of variables, associated methods, and other associated classes.
- Objects in Java are described by classes; a particular object is an instance of a particular class.

A class describes the data an object can contain by defining variables to contain the data in each instance of the class. A class describes the behavior of an object by defining methods for the class and possibly other auxiliary classes. Methods are named pieces of executable code; they are similar to what other programming languages call functions or procedures. Collectively, the variables, methods, and auxiliary classes of a class are called its members.

A class can define multiple methods with the same name if the number or type of parameters for each method is different. Multiple methods with the same name are called overloaded methods. Like C++, Java supports overloaded methods, but unlike C++, Java does not support overloaded operators. Overloaded methods are useful when you want to describe similar operations on different types of data. For example, Java provides a class called java.io.OutputStream that is used to write data. The OutputStream class defines three different write() methods: one to write a single byte of data, another to write some of the bytes in an array, and another to write all of the bytes in an array.

References: Class Declarations

Encapsulation is the technique of hiding the details of the implementation of an object, while making its functionality available to other objects.
When encapsulation is used properly, you can change an object's implementation without worrying that any other object can see, and therefore depend on, the implementation details. The portion of an object that is accessible to other types of objects is called the object's interface.[1] For example, consider a class called Square. The interface for this class might consist of methods to set and get the length of a side and a method to draw the square. The implementation of this Square class would include executable code that implements the various methods, as well as an internal variable that an object would use to remember its size. Variables that an object uses to remember things about itself are called state variables.

The point of the distinction between the interface and the implementation of a class is that it makes programs easier to maintain. The implementation of a class may change, but as long as the interface remains the same, these changes do not require changes to any other classes that may use the class.

In Java, encapsulation is implemented using the public, protected, and private access modifiers. If a field of a class is part of the interface for the class, the field should be declared with the public modifier or with no access modifier. The private and protected modifiers limit the accessibility of a field, so these modifiers should be used for state variables and other implementation-specific functionality. Here's a partial definition of a Square class that has the interface just described:

    class Square {
        private int sideLength;

        public void setSideLength(int len) {
            sideLength = len;
        }
        public int getSideLength() {
            return sideLength;
        }
        public void draw(int x, int y) {
            // code to draw the square
            ...
        }
    }

References: Method modifiers; Inner class modifiers; Variable modifiers

An object is typically created using an allocation expression. The newInstance() methods of the Class or java.lang.reflect.Constructor class can also be used to create an instance of a class.
In either case, the storage needed for the object is allocated by the system. When a class is instantiated, a special kind of method called a constructor is invoked. A constructor for a class does not have its own name; instead, it has the same name as the class of which it is a part. Constructors can have parameters, just like regular methods, and they can be overloaded, so a class can have multiple constructors. A constructor does not have a return type. The main purpose of a constructor is to do any initialization that is necessary for an object. If a class declaration does not define any constructors, Java supplies a default public constructor that takes no parameters. You can prevent a class from being instantiated by methods in other classes by defining at least one private constructor for the class without defining any public constructors.

References: Class; Constructors; Object Allocation Expressions

Java does not provide any way to explicitly destroy an object. Instead, an object is automatically destroyed when the garbage collector detects that it is safe to do so. The idea behind garbage collection is that if it is possible to prove that a piece of storage will never be accessed again, that piece of storage can be freed for reuse. This is a more reliable way of managing storage than having a program explicitly deallocate its own storage. Explicit memory allocation and deallocation is the single largest source of programming errors in C/C++. Java eliminates this source of errors by handling the deallocation of memory for you.

Java's garbage collector runs continuously in a low-priority thread. You can cause the garbage collector to take a single pass through allocated storage by calling System.gc(). Garbage collection will never free storage before it is safe to do so. However, garbage collection usually does not free storage as soon as it would be freed using explicit deallocation.
The logic of a program can sometimes help the garbage collector recognize that it is safe to free some storage sooner rather than later. Consider the following code:

    class G {
        byte[] buf;

        String readIt(FileInputStream f) throws IOException {
            buf = new byte[20000];
            int length = f.read(buf);
            return new String(buf, 0, 0, length);
        }
    }

The first time readIt() is called, it allocates an array that is referenced by the instance variable buf. The variable buf continues to refer to the array until the next time that readIt() is called, when buf is set to a new array. Since there is no longer any reference to the old array, the garbage collector will free the storage on its next pass. This situation is less than optimal. It would be better if the garbage collector could recognize that the array is no longer needed once a call to readIt() returns. Defining the variable buf as a local variable in readIt() solves this problem:

    class G {
        String readIt(FileInputStream f) throws IOException {
            byte[] buf = new byte[20000];
            int length = f.read(buf);
            return new String(buf, 0, 0, length);
        }
    }

Now the reference to the array is in a local variable that disappears when readIt() returns. After readIt() returns, there is no longer any reference to the array, so the garbage collector will free the storage on its next pass.

Just as a constructor is called when an object is created, there is a special method that is called before an object is destroyed by the garbage collector. This method is called a finalizer; it has the name finalize(). A finalize() method is similar to a destructor in C++. The finalize() method for a class must be declared with no parameters and the void return type; because the finalize() method in Object is declared protected, an overriding finalize() method must be declared with the protected or public modifier. A finalizer can be used to clean up after a class, by doing such things as closing files and terminating network connections. If an object has a finalize() method, it is normally called by the garbage collector before the object is destroyed.
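As a sketch of the kind of cleanup a finalizer can perform, consider the hypothetical class below (the names LogFile, open, and close() are illustrative, not from the text), which releases its "resource" when it is finalized:

```java
// Hypothetical resource-holding class; finalize() releases the resource
// if the program forgot to call close() itself.
class LogFile {
    private boolean open = true;

    void close() { open = false; }     // release the "resource"
    boolean isOpen() { return open; }

    // Object.finalize() is declared protected, so the override must be
    // protected (or public); it may declare that it throws Throwable.
    protected void finalize() throws Throwable {
        close();
        super.finalize();              // give the superclass a chance, too
    }
}
```

Because the garbage collector gives no timing guarantees, a finalizer like this is a safety net, not a substitute for calling close() explicitly.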
A program can also explicitly call an object's finalize() method, but in this case, the garbage collector does not call the method during the object destruction process. If the garbage collector does call an object's finalize() method, the garbage collector does not immediately destroy the object, because the finalize() method might do something that causes a variable to refer to the object again.[2] Thus the garbage collector waits to destroy the object until it can again prove it is safe to do so. The next time the garbage collector decides it is safe to destroy the object, it does so without calling the finalizer again. In any case, a finalize() method is never called more than once for a particular object.

[2] A finalize() method should not normally do something that results in a reference to the object being destroyed, but Java does not do anything to prevent this situation from happening.

The garbage collector guarantees that the thread it uses to call a finalize() method will not be holding any programmer-visible synchronization locks when the method is called. This means that a finalize() method never has to wait for the garbage collector to release a lock. If the garbage collector calls a finalize() method and the finalize() method throws any kind of exception, the garbage collector catches and ignores the exception.

References: System; The finalize method

One of the most important benefits of object-oriented programming is that it promotes the reuse of code, particularly by means of inheritance. Inheritance is a way of organizing related classes so that they can share common code and state information. Given an existing class declaration, you can create a similar class by having it inherit all of the fields in the existing definition.
Then you can add any fields that are needed in the new class. In addition, you can replace any methods that need to behave differently in the new class. To illustrate the way that inheritance works, let's start with the following class definition:

    class RegularPolygon {
        private int numberOfSides;
        private int sideLength;

        RegularPolygon(int n, int len) {
            numberOfSides = n;
            sideLength = len;
        }
        public void setSideLength(int len) {
            sideLength = len;
        }
        public int getSideLength() {
            return sideLength;
        }
        public void draw(int x, int y) {
            // code to draw the regular polygon
            ...
        }
    }

The RegularPolygon class defines a constructor, methods to set and get the side length of the regular polygon, and a method to draw the regular polygon. Suppose that after writing this class you realize that you have been using it to draw a lot of squares. You can use inheritance to build a more specific Square class from the existing RegularPolygon class as follows:

    class Square extends RegularPolygon {
        Square(int len) {
            super(4, len);
        }
    }

The extends clause indicates that the Square class is a subclass of the RegularPolygon class, or, looked at another way, RegularPolygon is a superclass of Square. When one class is a subclass of another class, the subclass inherits all of the fields of its superclass that are not private. Thus Square inherits the setSideLength(), getSideLength(), and draw() methods from RegularPolygon. These methods work fine without any modification, which is why the definition of Square is so short. All the Square class needs to do is define a constructor, since constructors are not inherited.

There is no limit to the depth to which you can carry subclassing. For example, you could choose to write a class called ColoredSquare that is a subclass of the Square class. The ColoredSquare class would inherit the public methods from both Square and RegularPolygon. However, ColoredSquare would need to override the draw() method with an implementation that handles drawing in color.
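These relationships can be exercised directly. The sketch below restates RegularPolygon and Square in compressed form (the draw() method is omitted, since drawing is beside the point here) and adds a trivial ColoredSquare, to show that inherited methods work unchanged and that an instance of a subclass is also an instance of each of its superclasses:

```java
// Compressed restatement of the classes from the text; draw() is omitted.
class RegularPolygon {
    private int numberOfSides;
    private int sideLength;

    RegularPolygon(int n, int len) {
        numberOfSides = n;
        sideLength = len;
    }
    public void setSideLength(int len) { sideLength = len; }
    public int getSideLength() { return sideLength; }
}

class Square extends RegularPolygon {
    Square(int len) { super(4, len); }       // constructors are not inherited
}

class ColoredSquare extends Square {
    ColoredSquare(int len) { super(len); }   // color handling omitted
}
```

A Square created with new Square(5) answers getSideLength() with 5 using code inherited from RegularPolygon, and a ColoredSquare instance satisfies instanceof Square, instanceof RegularPolygon, and instanceof Object.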
Having defined the three classes RegularPolygon, Square, and ColoredSquare, it is correct to say that RegularPolygon and Square are superclasses of ColoredSquare, and that ColoredSquare and Square are subclasses of RegularPolygon. To describe a relationship between classes that extends through exactly one level of inheritance, you can use the terms immediate superclass and immediate subclass. For example, Square is an immediate subclass of RegularPolygon, while ColoredSquare is an immediate subclass of Square. By the same token, RegularPolygon is the immediate superclass of Square, while Square is the immediate superclass of ColoredSquare.

A class can have any number of subclasses or superclasses. However, a class can have only one immediate superclass. This constraint is enforced by the syntax of the extends clause; it can specify the name of only one superclass. This style of inheritance is called single inheritance; it is different from the multiple inheritance scheme that is used in C++.

Every class in Java (except Object) has the class Object as its ultimate superclass. The class Object has no superclass. The subclass relationships between all of the Java classes can be drawn as a tree that has the Object class as its root. Another important difference between Java and C++ is that C++ does not have a class that is the ultimate superclass of all of its classes.

References: Class Inheritance; Interfaces; Object

If a class is declared with the abstract modifier, the class cannot be instantiated. This is different from C++, which has no way of explicitly specifying that a class cannot be instantiated. An abstract class is typically used to declare a common set of methods for a group of classes when there are no reasonable or useful implementations of the methods at that level of abstraction. For example, the java.lang package includes classes called Byte, Short, Integer, Long, Float, and Double.
These classes are subclasses of the abstract class Number, which declares the following methods: byteValue(), shortValue(), intValue(), longValue(), floatValue(), and doubleValue(). The purpose of these methods is to return the value of an object converted to the type implied by the method's name. Every subclass of Number implements all of these methods. The advantage of the abstraction is that it allows you to write code to extract whatever type of value you need from a Number object, without knowing the actual type of the underlying object.

Methods defined in an abstract class can be declared abstract. An abstract method is declared without any implementation; it must be overridden in a subclass to provide an implementation.

References: Class Modifiers; Inner class modifiers; Local class modifiers; Method modifiers; Number

If a class is declared with the final modifier, the class cannot be subclassed. Declaring a class final is useful if you need to ensure the exact properties and behavior of that class. Many of the classes in the java.lang package are declared final for that reason. Methods defined in a non-abstract class can be declared final. A final method cannot be overridden by any subclasses of the class in which it appears.

References: Class Modifiers; Inner class modifiers; Local class modifiers; Method modifiers

Java provides a construct called an interface to support certain multiple inheritance features that are desirable in an object-oriented language. An interface is similar to a class, in that an interface declaration can define both variables and methods. But unlike a class, an interface cannot provide implementations for its methods. A class declaration can include an implements clause that specifies the name of an interface. When a class declaration specifies that it implements an interface, the class inherits all of the variables and methods declared in that interface.
The class declaration must then provide implementations for all of the methods declared in the interface, unless the class is declared as an abstract class. Unlike the extends clause, which can specify only one class, the implements clause can specify any number of interfaces. Thus a class can implement an unlimited number of interfaces.

Interfaces are most useful for declaring that an otherwise unrelated set of classes have a common set of methods, without needing to provide a common implementation. For example, if you want to store a variety of objects in a database, you might want all of those objects to have a common set of methods for storing and fetching. Since the fetch and store methods for each object need to be different, it is appropriate to declare these methods in an interface. Then any class that needs fetch and store methods can implement the interface. Here is a simplistic example that illustrates such an interface:

    public interface Db {
        void dbStore(Database d, Object key);
        Object dbFetch(Database d, Object key);
    }

The Db interface declaration contains two methods, dbStore() and dbFetch(). Here is a partial class definition for a class that implements the Db interface:

    class DbSquare extends Square implements Db {
        public void dbStore(Database d, Object key) {
            // Perform database operation to store Square
            ...
        }
        public Object dbFetch(Database d, Object key) {
            // Perform database operation to fetch Square
            ...
        }
        ...
    }

The DbSquare class defines implementations for both of the methods declared in the Db interface. Note that dbFetch() is declared to return Object, exactly as in the interface; a method that implements an interface method must have the same return type. The point of this interface is that it provides a uniform way for unrelated objects to arrange to be stored in a database. The following code shows part of a class that encapsulates database operations:

    class Database {
        ...
        public void store(Object o, Object key) {
            if (o instanceof Db)
                ((Db)o).dbStore(this, key);
        }
        ...
    }

When the database is asked to store an object, it does so only if the object implements the Db interface, in which case it can call the dbStore() method of the object.

References: Interface Declarations

Java 1.1 provides a new feature that allows programmers to encapsulate even more functionality within objects. With the addition of inner classes to the Java language, classes can be defined as members of other classes, just like variables and methods. Classes can also be defined within blocks of Java code, just like local variables. The ability to declare a class inside of another class allows you to encapsulate auxiliary classes inside of a class, thereby limiting access to the auxiliary classes. A class that is declared inside of another class may have access to the instance variables of the enclosing class; a class declared within a block may have access to the local variables and/or formal parameters of that block.

A nested top-level class or interface is declared as a static member of an enclosing top-level class or interface. The declaration of a nested top-level class uses the static modifier, so you may also see these classes called static classes. A nested interface is implicitly static, but you can declare it to be static to make it explicit. Nested top-level classes and interfaces are typically used to group related classes in a convenient way. A nested top-level class or interface functions like a normal top-level class or interface, except that the name of the nested entity includes the name of the class in which it is defined. For example, consider the following declaration:

    public class Queue {
        ...
        public static class EmptyQueueException extends Exception { }
        ...
    }

Code that calls a method in Queue that throws an EmptyQueueException can catch that exception with a try statement like this:

    try {
        ...
    } catch (Queue.EmptyQueueException e) {
        ...
    }

A nested top-level class cannot access the instance variables of its enclosing class.
It also cannot call any non-static methods of the enclosing class without an explicit reference to an instance of that class. However, a nested top-level class can use any of the static variables and methods of its enclosing class without qualification. Only top-level classes in Java can contain nested top-level classes. In other words, a static class can only be declared as a direct member of a class that is declared at the top level, directly as a member of a package. In addition, a nested top-level class cannot declare any static variables, static methods, or static initializers.

References: Class Declarations; Methods; Nested Top-Level and Member Classes; Variables

A member class is an inner class that is declared within an enclosing class without the static modifier. Member classes are analogous to the other members of a class, namely the instance variables and methods. The code within a member class can refer to any of the variables and methods of its enclosing class, including private variables and methods. Here is a partial definition of a Queue class that uses a member class:

    public class Queue {
        private QueueNode queue;
        ...
        public Enumeration elements() {
            return new QueueEnumerator();
        }
        ...
        private class QueueEnumerator implements Enumeration {
            private QueueNode start, end;

            QueueEnumerator() {
                synchronized (Queue.this) {
                    if (queue != null) {
                        start = queue.next;
                        end = queue;
                    }
                }
            }
            public boolean hasMoreElements() {
                return start != null;
            }
            public synchronized Object nextElement() {
                ...
            }
        }

        private static class QueueNode {
            private Object obj;
            QueueNode next;

            QueueNode(Object obj) {
                this.obj = obj;
            }
            Object getObject() {
                return obj;
            }
        }
    }

The QueueEnumerator class is a private member class that implements the java.util.Enumeration interface. The advantage of this approach is that the QueueEnumerator class can access the private instance variable queue of the enclosing Queue class.
If QueueEnumerator were declared outside of the Queue class, the queue variable would need to be public, which would compromise the encapsulation of the Queue class. Using a member class that implements the Enumeration interface provides a means to offer controlled access to the data in a Queue without exposing the internal data structure of the class.

An instance of a member class has access to the instance variables of exactly one instance of its enclosing class. That instance of the enclosing class is called the enclosing instance. Thus, every QueueEnumerator object has exactly one Queue object that is its enclosing instance. To access an enclosing instance, you use the construct ClassName.this. The QueueEnumerator class uses this construct in the synchronized statement in its constructor to synchronize on its enclosing instance. This synchronization is necessary to ensure that the newly created QueueEnumerator object has exclusive access to the internal data of the Queue object.

The Queue class also contains a nested top-level, or static, class, QueueNode. However, this class is also declared private, so it is not accessible outside of Queue. The main difference between QueueEnumerator and QueueNode is that QueueNode does not need access to any instance data of Queue.

A member class cannot declare any static variables, static methods, static classes, or static initializers. Although member classes are often declared private, they can also be public or protected, or have the default accessibility.

To refer to a class declared inside of another class from outside of that class, you prefix the class name with the names of the enclosing classes, separated by dots. For example, consider the following declaration:

    public class A {
        public class B {
            public class C {
                ...
            }
            ...
        }
        ...
    }

Outside of the class named A, you can refer to the class named C as A.B.C.
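A hypothetical example (the names Counter and Reader are illustrative, not from the text) shows the two key properties of member classes in miniature: a member class can read the private state of its enclosing instance, and creating one from outside the enclosing class requires naming an enclosing instance, using the qualified allocation syntax enclosingInstance.new MemberClass():

```java
class Counter {
    private int count;                   // private, yet visible to Reader

    void increment() { count++; }

    class Reader {                       // member class: no static modifier
        int current() { return count; }  // reads the enclosing instance's count
    }
}
```

Inside Counter, a Reader can be created with a plain new Reader(); outside, given a Counter referred to by c, the expression c.new Reader() creates a Reader whose enclosing instance is c.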
References: Class Declarations; Field Expressions; Methods; Nested Top-Level and Member Classes; Variables

A local class is an inner class that is declared inside of a block of Java code. A local class is only visible within the block in which it is declared, so it is analogous to a local variable. However, a local class can access the variables and methods of any enclosing classes. In addition, a local class can access any final local variables or method parameters that are in the scope of the block that declares the class.

Local classes are most often used for adapter classes. An adapter class is a class that implements a particular interface, so that another class can call a particular method in the adapter class when a certain event occurs. In other words, an adapter class is Java's way of implementing a "callback" mechanism. Adapter classes are commonly used with the new event-handling model required by the Java 1.1 AWT and by the JavaBeans API. Here is an example of a local class functioning as an adapter class:

    public class Z extends Applet {
        public void init() {
            final Button b = new Button("Press Me");
            add(b);

            class ButtonNotifier implements ActionListener {
                public void actionPerformed(ActionEvent e) {
                    b.setLabel("Press Me Again");
                    doIt();
                }
            }

            b.addActionListener(new ButtonNotifier());
        }
        ...
    }

The above example is from an applet that has a Button in its user interface. To tell a Button object that you want to be notified when it is pressed, you pass an instance of an adapter class that implements the ActionListener interface to its addActionListener() method. A class that implements the ActionListener interface is required to implement the actionPerformed() method. When the Button is pressed, it calls the adapter object's actionPerformed() method. The main advantage of declaring the ButtonNotifier class in the method that creates the Button is that it puts all of the code related to creating and setting up the Button in one place.
As the preceding example shows, a local class can access local variables of the block in which it is declared. However, any local variables that are accessed by a local class must be declared final. A local class can also access method parameters and the exception parameter of a catch statement that are accessible within the scope of its block, as long as the parameter is declared final. The Java compiler complains if a local class uses a non-final local variable or parameter. The lifetime of a parameter or local variable is extended indefinitely, as long as there is an instance of a local class that refers to it.

References: Blocks; Class Declarations; Local Classes; Local Variables; Method formal parameters; Methods; The try Statement; Variables

An anonymous class is a kind of local class that does not have a name and is declared inside of an allocation expression. As such, an anonymous class is a more concise declaration of a local class that combines the declaration of the class with its instantiation. Here is how you can rewrite the previous adapter class example to use an anonymous class instead of a local class:

    public class Z extends Applet {
        public void init() {
            final Button b = new Button("Press Me");
            add(b);

            b.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    b.setLabel("Press Me Again");
                }
            });
        }
        ...
    }

As you can see, an anonymous class is declared as part of an allocation expression. If the name after new is the name of an interface, as is the case in the preceding example, the anonymous class is an immediate subclass of Object that implements the given interface. If the name after new is the name of a class, the anonymous class is an immediate subclass of the named class.

Obviously, an anonymous class doesn't have a name. The other restriction on an anonymous class is that it can't have any constructors other than the default constructor. Any constructor-like initialization must be done using an instance initializer.
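A minimal sketch of such an instance initializer, using java.util.Stack as the base class (the Preloaded wrapper and the string values are illustrative, not from the text):

```java
import java.util.Stack;

class Preloaded {
    // An anonymous subclass of Stack cannot declare a constructor, so the
    // bare { ... } block -- an instance initializer -- does the setup work.
    static Stack makeStack() {
        return new Stack() {
            {
                push("first");
                push("second");
            }
        };
    }
}
```

The initializer runs when the anonymous object is allocated, so the returned stack already holds two elements by the time makeStack() returns.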
Other than these differences, anonymous classes function just like local classes.

References: Allocation Expressions; Class Declarations; Instance Initializers; Object

It is possible to use inner classes without knowing anything about how they are implemented. However, a high-level understanding can help you comprehend the filenames that the compiler produces, as well as some of the restrictions associated with inner classes. The implementation of inner classes is less than transparent in a number of ways, primarily because the Java virtual machine does not know about inner classes. Instead, the Java compiler implements inner classes by rewriting them in a form that does not use inner classes. The advantage of this approach is that the Java virtual machine does not require any new features to be able to run programs that use inner classes.

Since a class declared inside another class is rewritten by the compiler as an external class, the compiler must give it a name that is unique outside of the class in which it is declared. The unique name is formed by prefixing the name of the inner class with the name of the class in which it is declared and a dollar sign ($). Thus, when the Queue class is compiled, the Java compiler produces four .class files. Because anonymous classes do not have names, the Java compiler gives each anonymous class a number for a name; the numbers start at 1. When the version of the Z applet that uses an anonymous class is compiled, the Java compiler produces two .class files: Z.class and Z$1.class.

In order to give an inner class access to the variables of its enclosing instance, the compiler adds a private variable to the inner class that references the enclosing instance. The compiler also inserts a formal parameter into each constructor of the inner class and passes the reference to the enclosing instance using this parameter.
Therefore, the QueueEnumerator class is rewritten as follows:

    class Queue$QueueEnumerator implements Enumeration {
        private Queue this$0;
        private QueueNode start, end;

        QueueEnumerator(Queue this$0) {
            this.this$0 = this$0;
            synchronized (this$0) {
                if (queue != null) {
                    start = queue.next;
                    end = queue;
                }
            }
        }
        ...
    }

As you can see, the compiler rewrites all references to the enclosing instance as this$0. One implication of this implementation is that you cannot pass the enclosing instance as an argument to its superclass's constructor, because this$0 is not available until after the superclass's constructor returns.
https://docstore.mik.ua/orelly/java/langref/ch05_03.htm
Intermediate-level techniques improve security and reliability

Level: Intermediate

Cameron Laird (claird@phaseit.net), Consultant, Phaseit, Inc.

04 May 2007

Are you tired of spending countless hours devoted to fixing memory faults? Do you find yourself constantly bogged down in programs that leak memory, violate memory bounds, use uninitialized data, and devote an excessive amount of run time to memory management? Use this article to help you conquer these pesky memory defects.

An earlier article (see Resources) urged a general approach to developing applications in C, C++, or Java that are free of memory faults. It outlined the importance of memory management, the errors most often made in its programming, and strategies for preventing and correcting them. That outline leaves plenty of detail still to explain.

Coding errors that were well understood decades ago continue to afflict a huge percentage of all programs in C and Java. This is frustrating to me personally, because many times when working with new clients, it is first necessary to clean up memory misallocations and related errors before addressing any subtleties of thread race conditions, esoteric transaction semantics, or performance analyses. Although memory and related errors should be elementary by now, they remain widely misunderstood in the field.

Still, there's good news: all these problems are correctable. And don't worry: you don't need to contravene any physical laws to boost program quality several levels, only shift your habits a bit.

Favorable environment for improvement of memory management

This article focuses attention on specific programming tools that are very useful.
Keep in mind throughout the explanations that teams often make the most progress with coding quality when they balance three important ideas: eyeballs on the code, automation, and expertise.

"Eyeballs on the code" means having plenty of peer review--through pair programming, inspection, or other techniques. The aim is that all artifacts that contribute to a program--all the source, all the build procedures, and so on--should be read. Segments that compile and link successfully and do not appear to cause any obvious problems often receive the benefit of the doubt. They shouldn't. Expectations need to be higher: at the very least, all the source code needs to be read and understood by someone.

Automation's slogan is to "make it executable". If your application needs to launch and initialize in four seconds, make that into an executable test; if there's been an issue with project builds, make sure that a reference host generates everything from scratch, automatically, every night. Generally, the way to advance in computing systems is to figure out how something should work, and turn that answer into a tool, library, or practice that can be automated to supply the conclusion tirelessly. Any time you feel irritation or boredom at having to repeat something obvious or tedious, it's time to celebrate that you've identified another opportunity to automate.

A significant challenge in writing articles on this subject is to balance attention on the tools that help automation with the attitude that supports it. More than any single tool, what helps make for success is diligence in the pursuit of quality.

Product expectations

An advantage of working in the area of code quality, and especially with memory faults, is that there's real progress in the marketplace. For other tool domains--editors, configuration management, estimation--advances through the years seem to have been only marginal, and it's rational to choose a toolset based on subjective "fit".
Plenty of teams edit and manage their code the same way they did a decade or more ago. The vendors of analysis tools, though, have distinguished themselves in my experience by the quality of their after-sale support and enhancements. I've been repeatedly surprised both by the problems good automatic tools are able to uncover and by the enthusiasm of the major vendors for working with customers to solve subtle puzzles. You should expect your tools to improve through time, along with your own knowledge of how to resolve the faults you find.

The most delicate of these three elements is expertise. I can quickly set up systems that prepare daily reports which detail, for example, that the code in the vicinity of line #285 of my_source.c might contribute to a memory leak, and the best organizations recognize that they need to correct this symptom. What specific changes are necessary in the source to achieve this? The answer isn't always obvious. The Samba development team, for instance, justly prides itself on the quality of its source. When Coverity, Inc. donated error reports back to the development team in 2006, based on scans with its own source analysis product, the Samba programmers turned around 198 of the faults within the first week. What is interesting, though, is that the other eighteen reports, many of which the Samba team initially categorized as "false positives", eventually were (nearly) all rewritten. Some issues of coding quality are subtle enough that even the best programmers need a bit of time to analyze and judge them.

Rewriting code to solve memory-management errors is an area where deep expertise pays off. Make sure your most experienced programmers are available to advise your team on this, and take advantage of the experts.
My experience with program-analysis vendors is that their support teams are unusually responsive; if you're using open-source tools, on the other hand, you can access lively mailing lists or related channels to work through thorny questions. With enough eyeballs, automation, and expertise, you can reasonably expect to solve all source quality problems. Realize what a change this is: much of the culture of C, which C++ and Java inherited, crystallized over twenty years ago, when lint was deeply flawed yet represented the state of the art in addressing code quality. Source code past a page in length used to be at least slightly mysterious and out-of-control. We can do better now. It's entirely reasonable to set and achieve the goal of error-free, warning-free, leak-free, thoroughly-inspected, and well-styled source code. A few tools help reach this goal.

Targets for program analysis

Start with the compiler you're already using. Most of this article focuses on C, although the same principles apply with C++ and Java. Take advantage of -O -Wall -W -Wshadow -pedantic or the equivalent for your compiler, that is, the compiler directives that report most usefully about incorrect and questionable syntax. Your goal should be to have a clean log: when you build your entire project from scratch, no compiler diagnostics should appear.

This claim may sound alien to you. Plenty of development teams demonstrate their sincere belief that diagnostic faults are as inevitable as proverbial death or taxes. That's not so! Even large codebases, including hundreds of thousands or even millions of lines of source, can be systematically cleaned of all statically-diagnosable faults, and doing so brings real benefits. The correct target for warning level is in the range from -Wall to -Wall -W -Wshadow -Wredundant-decls ... -pedantic, not the -W-free compilation that many teams wrongly take as the default.
Reasonable experts can differ over good style in handling, say, diagnostics about cast alignment; no one, though, should pretend that it's a real advantage to turn off warnings about blithe confusion of pointers and integer data, or about uninitialized variables. A few examples help demonstrate these arguments. Remember: a first step toward quality, especially in regard to memory management, is warning-free compilation.

Code examples

It's easy to forget how weak directive-free compilation is. Consider, for a first example, this blatant, undiagnosed memory error:

Listing 1. Example of blatant, undiagnosed memory error.

    /* Compile with "cc -c example1.c" and "cc -c -O -Wall example1.c". */
    #include <stdio.h>

    int main()
    {
        int j;
        printf("%d.\n", j);
        return 0;
    }

The variable j is used before it is ever assigned, yet a bare cc -c example1.c accepts the program without comment. With warnings enabled, as in cc -c -O -Wall example1.c, gcc reports the problem:

    ... warning: 'j' is ... uninitialized ...

Even disciplined and experienced development teams that practice good habits with inspections and unit tests occasionally generate such errors. Automatic checks are essential complements; any organization that doesn't already check for such diagnostics at least daily urgently needs to change. I've worked with programming teams of all sizes and situations for decades, and have yet to encounter one that didn't profit from at least the lint or -Wall level of diagnostic automation. Recognize that this is not "your father's lint". While compiler warnings a couple of decades ago had the reputation of generating a lot of noise--both false positives and false negatives--they've improved dramatically. Moreover, several competing proprietary products, including those from FlexeLint, Coverity, Grammatech, Parasoft, and Klocwork, offer even more value. While all these alternatives continue to improve, they also reward expertise. Here's an example of source that challenges both static and run-time analysis:

Listing 2. Example of difficult diagnosis.
    struct a {
        int b;
        int c;
    };
    void f2(), f3(int);

    int f1(int thing)
    {
        struct a x;
        if (thing < 0)
            x.b = 3;
        f2();
        if (thing < -3)
            f3(x.b);
        return 0;
    }

Here the use of x.b in f3(x.b) is actually safe: that call is reached only when thing < -3, and any such thing also satisfies thing < 0, so x.b has already been assigned. An analysis tool, though, may still flag x.b as possibly uninitialized. There are several possible responses; abandonment of analysis is certainly neither necessary nor desirable. You might annotate the call f3(x.b) with a tool-specific suppression comment such as /*NOUNINITIALIZED*/; you might initialize x unconditionally by writing struct a x = {0}; in place of struct a x;; or you might refactor the initialization into a function of its own:

    struct a {
        int b;
        int c;
    };
    void f2(), f3(int);

    void initialize_a(int thing, struct a *xptr)
    {
        if (thing < 0)
            xptr->b = 3;
    }

    int f1(int thing)
    {
        struct a x;
        initialize_a(thing, &x);
        f2();
        if (thing < -3)
            f3(x.b);
        return 0;
    }

How to choose a toolset

Choice of code analysis tools is complex enough to deserve a whole series of articles by itself. Don't let uncertainty slow you, though: a few simple tips will help you choose one tool as a starting point. Your own experience over a few weeks or months should reveal whether you need to supplement or replace your initial choice. Moreover, nearly all the proprietary vendors have options for evaluation, so, whether you choose an open-source or fee-licensed product, you can begin testing it against your own programs today. For the purpose of this article, analysis tools fall into two broad categories: static and run-time. Static analysis tools work as lint does: they scan source code and "reason" over the constructs there to report errors and difficulties. Along with their analytic sophistication, tools offer more value through their integration and usability. A simple lint has a fixed collection of errors it reports. A better tool typically has both graphical user interface (GUI) and command-line views to shorten the distance between problem detection and resolution. Also, good tools can be configured to "learn" local judgments: which code constructs your team allows or discourages. The best static analysis tools scan all of an application or suite's source.
This gives the tool the opportunity to analyze non-local conditions--for example, that thirty different Java source files pass the same type to a particular constructor, but there's also one instance of a syntactically-allowed but distinct type in a single case. Heuristic tests of this sort are an exciting area for current research. Run-time tools present a distinct pattern of use and functionality. They execute the program in a special environment, or, even more commonly, "instrument" the program to report on itself when executed in a standard manner. The aim is to execute low-level instructions and simultaneously analyze those instructions against rules about correct memory use. As a practical matter, I find arguments for one category over the other unpersuasive. I like and use several of the static and dynamic tools. For me, the differences generally have to do with other aspects of usage: dynamic tools have the potential to manage third-party libraries for which source isn't available; dynamic tools can produce illuminating reports when run by end-users, while protecting the details of those end-users' runs; and run-time tools isolate complicated path-dependent memory faults that are difficult to solve otherwise. On the other hand, run-time tools slow execution speed, sometimes unacceptably so, and many developers appear to find their reports harder to understand. Most telling in some circumstances are licensing terms: specific licensing clauses might preclude use of a particular product. As hinted above, analysis tools are available for languages besides C. While Java and higher-level languages deserve their reputations as safer than C--that is, likely to hide a lower incidence of undetected errors--good tools are available for many of them, including Fortran, Java, Python, and Ruby. Even functional languages, which are immune to many of the problems of C, can harbor memory leaks and other faults which traditional debugging largely misses.
Conclusions

Programs are buggy, and applications coded in C in particular are subject to a host of memory faults. As common as these and related errors are, though, they're solvable. Methodical adoption of well-known techniques of code inspection and code analysis invariably isolates specific source-level errors. At this point, the greatest barriers to dramatic improvement in code quality are cultural rather than technical. Tools and techniques are available to solve a large portion of all memory faults. The key to success over memory faults is the attitude that keeping code at a high level of quality, with full memory correctness, is indeed possible.

About the author

Cameron is a full-time consultant for Phaseit, Inc.
http://www.ibm.com/developerworks/aix/library/au-correctmem/
Hello, I have a scrolling background on a Quad. The background scrolling speed will eventually speed up over time, as the player lives longer. The game is essentially an endless runner, but the player never actually moves, so only the background will scroll to make it look like the player is moving. But because the player doesn't move, I need to get the objects (like collectibles and hazards) to move at the same speed as the background, towards the player.

The script for the background scroll is:

    using UnityEngine;
    using System.Collections;

    public class Scroll : MonoBehaviour {
        public float Speed = 0.1f;

        // Use this for initialization
        void Start () {
        }

        // Update is called once per frame
        void Update () {
            Vector2 offset = new Vector2(Time.time * Speed, 0);
            GetComponent<Renderer>().material.mainTextureOffset = offset;
        }
    }

and the script for the Object Movement is:

    using UnityEngine;
    using System.Collections;

    public class ObjectMovement : MonoBehaviour {
        public GameObject Background;

        // Use this for initialization
        void Start () {
        }

        // Update is called once per frame
        void Update () {
            Scroll scrollScript = Background.GetComponent<Scroll>();
            Debug.Log(scrollScript.Speed);
            GetComponent<Rigidbody2D>().transform.Translate(-scrollScript.Speed, 0, 0);
        }
    }

So in the ObjectMovement script, I am trying to call the Background scroll speed, and then move the objects at that speed in the negative X direction, which is all working just fine. The problem is that, no matter the speed of the background, the objects always move at the same speed. I could set the background to frozen, or moving super fast, and the objects always move at a constant speed. If anyone could help with this, that would be really great. Thanks! :D

Edit: I got the objects to move at different speeds. But the objects move faster than the background. The objects always move just a bit faster than the scrolling speed, no matter what the speed is set at.
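One likely culprit here (a sketch, not Unity-specific advice): the background offset is computed from Time.time, i.e. UV-units per second, while Translate(-Speed, 0, 0) moves Speed world-units on every frame, so the object's effective speed scales with frame rate. Scaling the per-frame step by the frame's delta time (in Unity, Time.deltaTime) makes the distance covered per second independent of frame rate. A tiny Python simulation of the two update styles shows the mismatch:

```python
def travel_per_frame(speed, fps, seconds):
    """Move `speed` units every frame -- what Translate(-Speed, 0, 0) does."""
    return sum(speed for _ in range(int(fps * seconds)))

def travel_delta_time(speed, fps, seconds):
    """Move `speed * dt` units every frame -- Translate(-Speed * dt, 0, 0)."""
    dt = 1.0 / fps
    return sum(speed * dt for _ in range(int(fps * seconds)))

# Per-frame movement depends on frame rate: 30 fps and 60 fps give
# different distances over the same second...
assert travel_per_frame(0.1, 30, 1) != travel_per_frame(0.1, 60, 1)
# ...delta-time-scaled movement gives the same distance either way.
assert abs(travel_delta_time(0.1, 30, 1) - travel_delta_time(0.1, 60, 1)) < 1e-9
```

Even with delta time applied, the units still differ (texture UVs vs. world units), so the object may need an extra conversion factor such as the quad's world-space width before the two speeds visually match.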
Answer by Second120 · May 24 at 12:42 AM

I tackled this problem by using another quad for the object. First, make the quad. It should stretch along the path you want it to take. Then, on the material, play around with the tiling setting until your object looks right. Next, use the same code that you used with your background quad. Finally, you should be able to find a formula for the speed quite easily. Here is the code:

    float x = (Time.time - sTime) * (scrollSpeed * meshRenderer.material.mainTextureScale.x);
    Vector2 offset = new Vector2(x - startOffset, 0);

Hope this helped!
https://answers.unity.com/questions/1195756/get-objects-to-move-relative-to-background-scrolli.html
JavaScript library and REST reference for Project Server 2013

The JavaScript library and REST reference for Project Server 2013 contains information about the JavaScript object model and the REST interface that you use to access Project Server functionality. You can use these APIs to develop cross-browser web apps, Project Professional 2013 add-ins, and apps for non-Windows devices that access Project Server 2013 and Project Online.

Note: The JavaScript object model and REST interface align with the Project Server client-side object model (CSOM). They provide equivalent functionality to the Microsoft.ProjectServer.Client namespace in the CSOM.

You can access Project Server functionality through the JavaScript object model, which is defined in the PS namespace in the %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\PS.js file. The ProjectContext object in the PS namespace is the entry point to the JavaScript object model.

Note: To browse the JavaScript object model and to help with debugging, you can use the PS.debug.js file in the same directory. To help with development on a remote computer, the Project 2013 SDK download includes the .NET Framework assemblies for the CSOM, and the PS.js and PS.debug.js files.

You can also access Project Server functionality through the REST interface. The entry point to the REST interface is the ProjectServer resource, which you access by using the endpoint URI. For example, the following query gets the assignments in the specified project (replace ServerName and pwaName, and change the GUID to match a project):

    ('263fc8d7-427c-e111-92fc-00155d3ba208')/Assignments

The ProjectServer resource is described in ProjectServer resources in the REST interface. Other REST resources are described in the documentation for the corresponding JavaScript objects and members in this reference.
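As a rough sketch of how such endpoint URIs are composed: the REST entry point hangs off the PWA site's _api path, with the ProjectServer resource as the root. The helper below is illustrative only -- ServerName, pwaName, and the GUID are placeholders, and the exact base path should be checked against the ProjectServer resources documentation.

```javascript
// Hypothetical helper: compose the REST URI for a project's assignments.
// serverName, pwaName and projectGuid are placeholders for real values.
function assignmentsUri(serverName, pwaName, projectGuid) {
    return "http://" + serverName + "/" + pwaName +
           "/_api/ProjectServer/Projects('" + projectGuid + "')/Assignments";
}

console.log(assignmentsUri("ServerName", "pwaName",
                           "263fc8d7-427c-e111-92fc-00155d3ba208"));
```

Issuing a GET against such a URI (with an Accept header of application/json when JSON output is wanted) returns the assignment collection for that project.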
For more information about using REST, see Client-side object model (CSOM) for Project Server and Programming using the SharePoint 2013 REST service. JavaScript library and REST reference for Project Server - PS.js JavaScript library and REST reference Contains information about the JavaScript object model and the REST interface for Project Server 2013.
https://docs.microsoft.com/en-us/previous-versions/office/project-javascript-api/jj712612(v=office.15)?redirectedfrom=MSDN
What is CORS?

CORS stands for Cross-Origin Resource Sharing. This is essentially a mechanism that allows restricted resources (e.g. web services) on a web page to be requested from another domain outside the domain from which the resource originated. The default security model used by JavaScript running in browsers is that, unless the server allows it, a service cannot be accessed from a domain other than the domain of the web page. CORS defines a way in which a browser and server can interact to determine whether or not it is safe to allow the cross-origin request.

Let us start by developing a simple ASP.NET Web API code-first Entity Framework application using Visual Studio 2015. We will then see why it is necessary to enable CORS. Thereafter, we shall force authorization on the controller. Since Web API apps are non-visual, we will need to register and login by making API requests to the server app. This tutorial will give us a simple way to address the token exchanges that take place between the client's jQuery app and the server. We will model the following Student class:

We shall first create an ASP.NET Web API web application.

1. Start Visual Studio 2015
2. File >> New >> Project
3. Templates >> Visual C# >> Web >> ASP.NET Web Application (.NET Framework)
4. I named the web application "SchoolAPI". Go ahead and give it whatever name you fancy.
5. Click OK
6. Under "Select a template" select "Web API" and leave "Host in the cloud" and "Add unit tests" unchecked:
7. Click OK
8. Once the application is created, build it by hitting Shift + Ctrl + B on the keyboard.
9. Run the application by hitting Ctrl + F5. You will see a fully functional ASP.NET Web API app that looks like this:
10. The simplistic API available in this template is served from the ValuesController.cs file. To view its output, add /api/values to the URL address. You will see the following:
11. It is obvious from this message that authentication is enabled on this controller. Let's disable it.
Go to ValuesController.cs and comment out (or delete) [Authorize] on line 10. Recompile your app then refresh the page in your browser. You should see the following XML output:

12. We can force our application to serve JSON data. To do this, add the following code to the bottom of the Register() method in App_Start/WebApiConfig.cs:

    var json = config.Formatters.JsonFormatter;
    json.SerializerSettings.PreserveReferencesHandling = Newtonsoft.Json.PreserveReferencesHandling.Objects;
    config.Formatters.Remove(config.Formatters.XmlFormatter);

13. Recompile your app then refresh the page in your browser. You should see the following much simpler JSON output:

14. Now we are ready to code our Student model. Create a Student.cs class file in the Models folder:

    public class Student {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Major { get; set; }
        public DateTime DateOfBirth { get; set; }
    }

15. Next we will need to create an EF context class. In the Models folder, create another C# class file named SchoolContext.cs. Add to it the following code:

    public class SchoolContext : DbContext {
        public SchoolContext() : base("DefaultConnection") { }
        public DbSet<Student> Students { get; set; }
    }

16. It is now possible for us to create the database using Entity Framework's code-first paradigm. Go to the "Package Manager Console" by clicking on Tools >> Nuget Package Manager >> Package Manager Console. Execute the following EF migration command in order to create the Configuration.cs file in a Migrations\School folder:

    enable-migrations -ContextTypeName SchoolContext -MigrationsDirectory Migrations\School

17. You will experience a response that looks like this:

    PM> enable-migrations -ContextTypeName SchoolContext -MigrationsDirectory Migrations\School
    Checking if the context targets an existing database...
    Code First Migrations enabled for project SchoolAPI.

18.
The other by-product of what we just did is that you will notice Configuration.cs was created in the Migrations\School directory. In that file is a Seed() method that allows us to enter some dummy data. Let us take advantage of this capability. Add the following method to the Configuration class:

    public static List<Student> GetSampleStudents() {
        ...
    }

19. Replace the contents of the Seed() method with the following code:

    context.Students.AddOrUpdate(
        s => new { s.FirstName, s.LastName },
        GetSampleStudents().ToArray()
    );
    context.SaveChanges();

20. Compile your application.

21. The next step is to add a migration. Execute the following command in the "Package Manager" console:

    add-migration -ConfigurationTypeName SchoolAPI.Migrations.School.Configuration "InitialCreate"

Make sure you adjust the fully qualified class namespace in the above command to match your environment.

22. This time the output will look similar to the following:

    PM> add-migration -ConfigurationTypeName SchoolAPI.Migrations.School.Configuration "InitialCreate"
    Scaffolding migration 'InitialCreate'.

23. No database has been created yet. The next command should create the database and seed the Students table:

    update-database -ConfigurationTypeName SchoolAPI.Migrations.School.Configuration

24. This time your output will look similar to this:

    PM> update-database -ConfigurationTypeName SchoolAPI.Migrations.School.Configuration
    Specify the '-Verbose' flag to view the SQL statements being applied to the target database.
    Applying explicit migrations: [201610061908330_InitialCreate].
    Applying explicit migration: 201610061908330_InitialCreate.
    Running Seed method.

25. To view the contents of the database, go to View >> SQL Server Object Explorer. Navigate to your database and you should see the Students table.

26. If you check the contents of the Students table, you will see that it has indeed been populated with our dummy data.

27.
Let us create a Web API controller that serves Student data. Right-click on Controllers, then select Add followed by Controller.

28. Choose "Web API 2 Controller with actions, using Entity Framework" then click on the Add button. Enter these values in the dialog:

    Model class: Student
    Data context class: SchoolContext
    Controller name: StudentsController

29. Click on Add. This will create StudentsController.cs. Compile your application then point your browser to /api/students. You should see your student data in JSON format.

From now on it starts getting very interesting, because we are about to deal with the challenges of CORS and authentication.

CORS

To experience CORS, let us add another empty ASP.NET application to our project.

1. File >> Add >> New Project >> Visual C# >> Web >> ASP.NET Web Application (.NET Framework) >> Name it SchoolClient >> OK
2. Choose Empty then click on OK. This adds an empty ASP.NET project to your solution.
3. We will be needing jQuery and Bootstrap in this new empty project. Therefore, add these libraries using the NuGet Package Manager. Right-click on the main SchoolClient project node and choose "Manage Nuget Packages…". Click on Browse in the top-left corner and enter jquery in the search box. Choose jQuery then click on the Install button.
4. Do the same for bootstrap.
5. Your SchoolClient project now has both the jQuery & Bootstrap libraries.
6. Add a plain HTML file to the SchoolClient project named index.html.
7. Change the contents of the <title> tag to "School Client".
8. Drag & drop Scripts/jquery-3.1.1.min.js (your version of jQuery may be more recent than mine) and Content/bootstrap.min.css into the <head> section of index.html. Your index.html page will look similar to this:

    <!DOCTYPE html>
    <html>
    <head>
        <title>School Client</title>
        <meta charset="utf-8" />
        <link href="Content/bootstrap.min.css" rel="stylesheet" />
        <script src="Scripts/jquery-3.1.1.min.js"></script>
    </head>
    <body>
    </body>
    </html>

9.
Add the following HTML code into the <body> section of index.html:

    <div class="container">
        <h3>Cors Request</h3>
        <button id="btnGet" class="btn btn-primary">Get Students</button>
        <pre id="preOutput"></pre>
    </div>
    <script>
        var baseUrl = "";
        $(function () {
            var getStudents = function () {
                var url = baseUrl + "api/students/";
                $.get(url).always(showResponse);
                return false;
            };
            var showResponse = function (object) {
                $("#preOutput").text(JSON.stringify(object, null, 4));
            };
            $("#btnGet").click(getStudents);
        });
    </script>

10. Run the "SchoolAPI" project so that it shows you the following screen:
11. In index.html, adjust the value of baseUrl so that the port number matches the port number in the above page.
12. Right-click on index.html and choose "View in browser (…)". This will serve this HTML page in your browser. At this point we should hit F12 in your browser in order to open developer tools. Click on the Console tab.
13. Now click on the "Get Students" button. This should produce an error in the console panel that suggests that our server app does not support CORS. This is because the two sites that we are dealing with have different port numbers and are treated as different domain names.
14. The solution to this problem is not hard. We need to add a NuGet package named "Microsoft.AspNet.WebApi.Cors" to our server application.
15. Firstly, add config.EnableCors(); in App_Start/WebApiConfig.cs at the top of the Register method.
16. Secondly, add [EnableCors("*", "*", "GET")] just above the class declaration of the StudentsController class. This allows all GET verb requests from other domains.
17. Build your solution and refresh index.html. You should now see the students being served from our server application.

Authorization tokens

Let us decorate the StudentsController with the [Authorize] annotation. Compile your app then click on the "Get Students" button.
As expected, you should receive this message suggesting that authentication is necessary:

1. Since our Web API application has no UI for user registration, our first challenge is to register a user. We will need to do so by sending RESTful requests to the AccountController. Since CORS restrictions apply to the AccountController too, do not forget to add [EnableCors("*", "*", "*")] to this class declaration.
2. Looking further in the AccountController class, you will notice that there is a Register() action method which we will need to post to with Email, Password and ConfirmPassword data items.
3. Add the following registration form right below the <h3> tag in the index.html client project:

    <div class="row bg-info">
        <div class="col-sm-6">
            <form id="frmRegister" role="form">
                <div class="form-group">
                    <input type="text" name="Email" placeholder="Email" />
                </div>
                <div class="form-group">
                    <input type="password" name="password" placeholder="Password" />
                </div>
                <div class="form-group">
                    <input type="password" name="confirmPassword" placeholder="Confirm Password" />
                </div>
                <div class="form-group">
                    <input type="button" id="btnRegister" value="Register" class="btn btn-primary" />
                </div>
            </form>
        </div>
        <div class="col-sm-6">
        </div>
    </div>

4. Add this jQuery code inside the $(function () { … }) block to submit the registration data:

    var register = function () {
        var url = baseUrl + "api/account/register";
        var data = $("#frmRegister").serialize();
        $.post(url, data).always(showResponse);
        return false;
    };
    $("#btnRegister").click(register);

5. If you click on the Register button without any data, you will see this validation error message:
6. Enter real data then click on Register. If you receive "" as a response, then the new user registration is successful.
7. How about if you click on the "Get Students" button? You will get the error message: Authorization has been denied for this request.
8. You guessed right. The next challenge is to login with the credentials that we just created.
Add the following login form to index.html inside the empty <div class="col-sm-6"> tag:

    <form id="frmLogin" role="form">
        <input type="hidden" name="grant_type" value="password" />
        <div class="form-group">
            <input type="text" name="userName" placeholder="UserName" />
        </div>
        <div class="form-group">
            <input type="password" name="password" placeholder="Password" />
        </div>
        <div class="form-group">
            <input type="button" id="btnLogin" value="Login" class="btn btn-primary" />
        </div>
    </form>

9. Add this jQuery code to handle the login process:

    var login = function () {
        var url = baseUrl + "Token";
        var data = $("#frmLogin").serialize();
        $.post(url, data).always(showResponse);
        return false;
    };
    $("#btnLogin").click(login);

10. Refresh index.html in your browser. Your page now has a login form.

11. Enter the email and password you created earlier, then click on Login. You will get a CORS error, just like we did previously. This suggests that the /Token endpoint is not enabled for CORS. To solve this problem, find the GrantResourceOwnerCredentials() method in /Providers/ApplicationOAuthProvider.cs. Add the following code to the bottom of this method:

    context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

12. Build your solution then try again to login. This time you will get a successful response containing an access_token. We will need to pass on this access_token whenever we make requests for any API service on a secured controller.

13. Add this global variable to save the value of the access_token into it:

    var accessToken = "";

14. Replace the current login() function with this function that saves the token into the variable accessToken:

    var login = function () {
        var url = baseUrl + "Token";
        var data = $("#frmLogin").serialize();
        $.post(url, data)
            .done(saveAccessToken)
            .always(showResponse);
        return false;
    };

15. Next, add these two helper functions just below the global variables:

    var saveAccessToken = function (data) {
        accessToken = data.access_token;
    };

    var getHeaders = function () {
        if (accessToken) {
            return { "Authorization": "Bearer " + accessToken };
        }
    };

The first function saves the access_token into the accessToken global variable. The second function constructs the Authorization header token value that needs to be sent with each request for API service.

16. Replace the getStudents() function with the following code so that the request for student data is also accompanied by the appropriate access_token:

    var getStudents = function () {
        var url = baseUrl + "api/students/";
        $.ajax(url, {
            type: "GET",
            headers: getHeaders()
        }).always(showResponse);
        return false;
    };

17. Refresh index.html in your browser, login again, then click on "Get Students". You should see the students being displayed.

We have successfully registered a user, logged in with our credentials, and were able to read data from a secured controller.

Publishing ASP.NET Web API project to Azure

1. Right-click on the SchoolAPI Web API project and choose Publish.
2. Click on "Microsoft Azure App Service".
3. You will need to login into your Azure account. Click on "Microsoft Account" in the top right corner. Select an account or choose "Add an account…".
4. Click on the New… button.
5. Simplify the "Web App Name" such that it is unique when combined with azurewebsites.net. Also, create a resource group if you do not have one already. I already had a resource group so I did not create another. However, I needed to add an "App Service Plan" so I created one as follows:
6. Back in the "Create App Service" dialog, it looked like this:
7. To setup the database on SQL Azure, click on "Explore additional Azure services".
8. Click on the + icon to add a SQL database on Azure.
9. It is necessary to have at least one database server to host many database instances. To this end, name your database and enter an administrator username and password.
Make sure you remember the administrator username and password as you will be needing these credentials over and over again. Also, it is worth giving your database a simpler name. Note that the connection string default name is DefaultConnection, which is indeed what our Web API application is using. 10. Click OK. 11. Click Create. It may take a few minutes to complete the steps needed to create the database. 12. You can click on the “Validate Connection” button to ensure that Visual Studio can connect to your app service. When you click Next you will be taken to the settings tab. Enable both checkboxes “Use this connection string at runtime (update destination web.config)” and “Execute Code First Migrations (runs on application start)”. 13. Click on “Next”. You will be taken to the Preview tab. 14. Click Publish. Take note of the activity in the Output pane in Visual Studio. The application is being deployed to Azure. Upon completion of deployment, the web application will be displayed in your default browser. 15. Back in Visual Studio, edit the index.html file in the SchoolClient project and change the value of the variable baseUrl to point to the URL of the deployed site in Azure instead of localhost. 16. Right-click on index.html and select “View in browser”. Register an account, login, then click on “Get Students”. If you see student data then it is clear that a database was created, database migrations were processed and registration/authentication is working. I hope you had the patience to get this far and benefited from the learning process. References
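As a closing note on what EnableCors and the Access-Control-Allow-Origin header used above actually do, here is a hedged sketch (plain JavaScript, not ASP.NET) of the server-side decision that CORS standardizes: compare the request's Origin header against an allow-list and, on a match, echo back an Access-Control-Allow-Origin header; without that header, the browser withholds the cross-origin response from scripts.

```javascript
// Sketch of the allow-origin decision behind CORS. `allowed` is an
// illustrative allow-list; "*" means any origin may read the response.
function corsHeaders(origin, allowed) {
    if (allowed.includes("*")) {
        return { "Access-Control-Allow-Origin": "*" };
    }
    if (allowed.includes(origin)) {
        return { "Access-Control-Allow-Origin": origin };
    }
    return {}; // no header: the browser blocks cross-origin reads
}

console.log(corsHeaders("http://localhost:5000", ["http://localhost:5000"]));
```

This is why step 11 of the token section worked: adding the header inside GrantResourceOwnerCredentials() performs the same echo for the /Token endpoint.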
https://blog.medhat.ca/2016/10/azure-deployment-of-aspnet-web-api-cors.html
Identical binary trees

Given two binary trees, check whether the two trees are identical. The first question is: when do you call two binary trees identical? If the roots of the two trees are equal and their left and right subtrees are also identical, then the two trees are called identical binary trees. For example, the two trees below are identical, whereas the next two trees are not identical, as node 6 and node 8 differ, as do node 14 and node 16.

Identical binary trees : Thoughts

The solution to the problem lies in the definition of an identical tree itself. We check whether the roots are equal; if they are not, there is no point continuing down the tree, so just return that the trees are not identical. If the roots are equal, we have to check whether the subtrees are identical. To do so, take the left subtrees of both original trees and validate that they are identical too. If not, return a negative response. If yes, check whether the right subtrees are identical too.

Did you notice two things? First, the solution to the original problem depends on the solutions for the subtrees, and at each node the problem reduces itself to a smaller subproblem. Second, the processing order is preorder: we process the roots of the two trees, then the left subtrees, and at last the right subtrees. As Mirror binary search tree and Delete binary search tree are solved using postorder traversal, this problem can be solved using preorder traversal with special processing at the root node. We could also solve it by postorder traversal, but in that case we would unnecessarily scan subtrees even when the roots themselves are not equal.

Identical binary trees : example

Let's take an example and see how it works. Start with the root nodes; both roots are equal, so move down to the left subtree. The left node in both subtrees is node(5), which is again equal. Go down the left subtree. Again, the left node in both subtrees is node(1), so move down the left subtrees of both. As per the definition, two empty binary trees are identical.
So when we move down to the left child of node(1) in both trees, the children are null, hence identical, and we return true to the parent node(1). The same is true for the right child. Now we know that at node(1) the left and right subtrees are identical and the node values are equal, so we return true to the parent node(5). The left subtrees of node(5) are identical, so move to the right subtree, to node(6). Similar to node(1), it also returns true to its parent. At node(5), the left and right subtrees are identical and the node values are equal, so we return true to node(10). The left subtrees are identical; now go to the right subtree of node(10) in both trees. Node(14) is equal in both trees, so check whether the left subtrees of node(14) in both trees are identical. Both the left and right subtrees of node(14) are identical, just as for node(1) and node(6) in the left subtree, so they return true to the parent node(14). Now, at node(14), the left and right subtrees are identical, so return true up to the parent node(10). Finally, at the root nodes of both trees, the left subtrees and right subtrees are identical, so we return true for the question whether these two trees are identical. Can you draw non-identical binary trees, work out the flow, and determine when they will be called out as non-identical?

Identical binary trees : Implementation

#include <stdio.h>
#include <stdlib.h>

struct node{
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

#define true 1
#define false 0

Node *createNode(int value){
    Node *newNode = (Node *)malloc(sizeof(Node));
    newNode->value = value;
    newNode->left = newNode->right = NULL;
    return newNode;
}

Node *addNode(Node *node, int value){
    if(!node)
        return createNode(value);
    if (node->value > value)
        node->left = addNode(node->left, value);
    else
        node->right = addNode(node->right, value);
    return node;
}

void inorderTraversal(Node *node){
    if(!node) return;
    inorderTraversal(node->left);
    printf("%d ", node->value);
    inorderTraversal(node->right);
}

int isIdenticalBST( Node *firstTree, Node *secondTree ){
    if( !
(firstTree || secondTree) )          // both of them are empty
        return true;
    if( !(firstTree && secondTree) ) // one of them is empty
        return false;
    return ( firstTree->value == secondTree->value )
        && isIdenticalBST( firstTree->left, secondTree->left )
        && isIdenticalBST( firstTree->right, secondTree->right );
}

/* Driver program for the function written above */
int main(){
    Node *firstRoot = NULL;
    // Creating a binary tree
    firstRoot = addNode(firstRoot, 30);
    firstRoot = addNode(firstRoot, 20);
    firstRoot = addNode(firstRoot, 15);
    firstRoot = addNode(firstRoot, 25);
    firstRoot = addNode(firstRoot, 40);
    firstRoot = addNode(firstRoot, 38);
    firstRoot = addNode(firstRoot, 45);

    printf("Inorder traversal of tree is : ");
    inorderTraversal(firstRoot);
    printf("\n");

    Node *secondRoot = NULL;
    // Creating a binary tree
    secondRoot = addNode(secondRoot, 30);
    secondRoot = addNode(secondRoot, 20);
    secondRoot = addNode(secondRoot, 15);
    secondRoot = addNode(secondRoot, 25);
    secondRoot = addNode(secondRoot, 40);
    secondRoot = addNode(secondRoot, 38);
    secondRoot = addNode(secondRoot, 45);

    printf("Inorder traversal of tree is : ");
    inorderTraversal(secondRoot);
    printf("\n");

    printf("Two trees are identical : %s",
           isIdenticalBST(firstRoot, secondRoot) ? "True" : "False");
    return 0;
}

The complexity of checking whether two trees are identical binary trees is O(n), where n is the number of nodes in the smaller tree. Please share if there is something wrong or missing. If you are willing to contribute and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com
https://algorithmsandme.com/category/data-structures/binary-search-tree/page/2/
Many C++ Windows programmers get confused over what bizarre identifiers like TCHAR and LPCTSTR are. In this article, I will attempt my best to clear out the fog.

In general, a character can be represented in 1 byte or 2 bytes. Let's say a 1-byte character is an ANSI character: all English characters are represented through this encoding. And let's say a 2-byte character is Unicode, which can represent all languages in the world. The Visual C++ compiler supports char and wchar_t as native data-types for ANSI and Unicode characters, respectively. Though there is a more concrete definition of Unicode, for this discussion assume it is a two-byte character, which the Windows OS uses for multiple-language support.

What if you want your C/C++ code to be independent of the character encoding/mode used? Suggestion: use generic data-types and names to represent characters and strings. For example, instead of:

char cResponse; // 'Y' or 'N'
char sUsername[64];
// str* functions

or:

wchar_t cResponse; // 'Y' or 'N'
wchar_t sUsername[64];
// wcs* functions

you can simply code it in a more generic manner, supporting multi-lingual (i.e., Unicode) text:

#include <TCHAR.H> // Implicit or explicit include
TCHAR cResponse; // 'Y' or 'N'
TCHAR sUsername[64];
// _tcs* functions

The following project setting in the General page describes which Character Set is to be used for compilation: (General -> Character Set)

This way, when your project is compiled as Unicode, TCHAR translates to wchar_t. If it is compiled as ANSI/MBCS, it translates to char. You are free to use char and wchar_t directly; the project settings will not affect any direct use of these keywords. TCHAR is defined as:

#ifdef _UNICODE
typedef wchar_t TCHAR;
#else
typedef char TCHAR;
#endif

The macro _UNICODE is defined when you set Character Set to "Use Unicode Character Set", and therefore TCHAR means wchar_t.
When Character Set is set to "Use Multi-Byte Character Set", TCHAR means char.

Likewise, to support multiple character sets using a single code base, and possibly to support multiple languages, use the generic functions (macros). Instead of using strcpy, strlen, strcat (including the secure versions suffixed with _s), or wcscpy, wcslen, wcscat (including secure), you should use the _tcscpy, _tcslen, _tcscat functions.

As you know, strlen is prototyped as:

size_t strlen(const char*);

And wcslen is prototyped as:

size_t wcslen(const wchar_t*);

You may better use _tcslen, which is logically prototyped as:

size_t _tcslen(const TCHAR*);

WC is for Wide Character; therefore wcs turns out to mean wide-character string. In the same way, _tcs would mean _T Character String. And, as you know, _T may be char or wchar_t, logically.

But in reality, _tcslen (and the other _tcs functions) are actually not functions, but macros. They are defined simply as:

#ifdef _UNICODE
#define _tcslen wcslen
#else
#define _tcslen strlen
#endif

You should refer to TCHAR.H to look up more macro definitions like this.

You might ask why they are defined as macros, and not implemented as functions instead. The reason is simple: a library or DLL may export only a single function with a given name and prototype (ignore the overloading concept of C++). For instance, suppose you export a function as:

void _TPrintChar(char);

How is a Unicode client supposed to call it, when it needs:

void _TPrintChar(wchar_t);

_TPrintChar cannot be magically converted into a function taking a 2-byte character.
There have to be two separate functions:

void PrintCharA(char);    // A = ANSI
void PrintCharW(wchar_t); // W = Wide character

And a simple macro, as defined below, hides the difference:

#ifdef _UNICODE
void _TPrintChar(wchar_t);
#else
void _TPrintChar(char);
#endif

The client would simply call it as:

TCHAR cChar;
_TPrintChar(cChar);

Note that both TCHAR and _TPrintChar map to either Unicode or ANSI, and therefore cChar and the argument to the function would both be either char or wchar_t.

Macros avoid these complications and allow us to use either the ANSI or the Unicode function for characters and strings. Most of the Windows functions that take a string or a character are implemented this way, and for the programmer's convenience, only one function (a macro!) is exposed. SetWindowText is one example:

// WinUser.H
#ifdef UNICODE
#define SetWindowText SetWindowTextW
#else
#define SetWindowText SetWindowTextA
#endif // !UNICODE

There are very few functions that do not have macros and are available only with the suffix W or A. One example is ReadDirectoryChangesW, which doesn't have an ANSI equivalent.

You all know that we use double quotation marks to represent strings. A string represented in this manner is an ANSI string, with 1 byte per character. Example:

"This is ANSI String. Each letter takes 1 byte."

The string given above is not Unicode and would not suffice for multi-language support. To represent a Unicode string, you need to use the prefix L. An example:

L"This is Unicode string. Each letter would take 2 bytes, including spaces."

Note the L at the beginning of the string, which makes it a Unicode string. All characters (I repeat, all characters) take two bytes, including all English letters, spaces, digits, and the null character. Therefore, the length of a Unicode string is always a multiple of 2 bytes. A Unicode string of length 7 characters needs 14 bytes, and so on.
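The 2-bytes-per-character claim is easy to verify outside of C++ as well. Here is a quick check in Python, used purely as a calculator for the byte layout (this is not Windows API code); UTF-16-LE is the encoding Windows wide strings use, and for the plain ASCII text below each character does occupy exactly 2 bytes:

```python
# Each character of a Windows wide string (UTF-16-LE) occupies 2 bytes
# for characters in the Basic Multilingual Plane, including plain ASCII.
s = "This is Unicode string."
encoded = s.encode("utf-16-le")
print(len(s), len(encoded))  # 23 46

assert len(encoded) == 2 * len(s)
```

A terminating null, not included by Python's encode, would add 2 more bytes in the C representation.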
A Unicode string taking 15 bytes, for example, would not be valid in any context. In general, a string occupies a multiple of sizeof(TCHAR) bytes!

When you need to express a hard-coded string, you can use:

"ANSI String"; // ANSI
L"Unicode String"; // Unicode
_T("Either string, depending on compilation"); // ANSI or Unicode
// or use the TEXT macro, if you need more readability

The non-prefixed string is an ANSI string, the L-prefixed string is Unicode, and a string specified in _T or TEXT would be either, depending on compilation. Again, _T and TEXT are nothing but macros, and are defined as:

// SIMPLIFIED
#ifdef _UNICODE
#define _T(c) L##c
#define TEXT(c) L##c
#else
#define _T(c) c
#define TEXT(c) c
#endif

The ## symbol is the token-pasting operator, which turns _T("Unicode") into L"Unicode", where the string passed is the argument to the macro, if _UNICODE is defined. If _UNICODE is not defined, _T("Unicode") simply means "Unicode". The token-pasting operator existed even in the C language, and is not specific to VC++ or character encodings. Note that these macros can be used for strings as well as characters: _T('R') turns into L'R' or plain 'R'; the former is a Unicode character, the latter an ANSI character.

No, you cannot use these macros to convert variables (string or character) into Unicode/non-Unicode text. The following is not valid:

char c = 'C';
char str[16] = "CodeProject";
_T(c);
_T(str);

The last two lines would compile successfully in an ANSI (Multi-Byte) build, since _T(x) would simply be x, and therefore _T(c) and _T(str) would come out to be c and str, respectively. But when you build with the Unicode character set, it fails to compile:

error C2065: 'Lc' : undeclared identifier
error C2065: 'Lstr' : undeclared identifier

I would not like to insult your intelligence by describing why and what those errors are.
There exists a set of conversion routines to convert MBCS to Unicode and vice versa, which I will explain soon. It is important to note that almost all functions that take a string (or character), primarily in the Windows API, have a generalized prototype in MSDN and elsewhere. The function SetWindowTextA/W, for instance, is classified as:

BOOL SetWindowText(HWND, const TCHAR*);

But, as you know, SetWindowText is just a macro, and depending on your build settings, it means either of the following:

BOOL SetWindowTextA(HWND, const char*);
BOOL SetWindowTextW(HWND, const wchar_t*);

Therefore, don't be puzzled if the following call fails to get the address of this function!

HMODULE hDLLHandle;
FARPROC pFuncPtr;

hDLLHandle = LoadLibrary(L"user32.dll");
pFuncPtr = GetProcAddress(hDLLHandle, "SetWindowText");
// pFuncPtr will be null, since there doesn't exist any function with the name SetWindowText!

From User32.DLL, the two functions SetWindowTextA and SetWindowTextW are exported, not a function with the generalized name.

Interestingly, the .NET Framework is smart enough to locate the function in the DLL by its generalized name:

[DllImport("user32.dll")]
extern public static int SetWindowText(IntPtr hWnd, string lpString);

No rocket science, just a bunch of ifs and elses around GetProcAddress!

All of the functions that have ANSI and Unicode versions have their actual implementation only in the Unicode version. That means, when you call SetWindowTextA from your code, passing an ANSI string, it converts the ANSI string to Unicode text and then calls SetWindowTextW. The actual work (setting the window text/title/caption) is performed by the Unicode version only!

Take another example, which retrieves the window text, using GetWindowText. You call GetWindowTextA, passing an ANSI buffer as the target buffer. GetWindowTextA first calls GetWindowTextW, probably allocating a Unicode string (a wchar_t array) for it.
Then it converts that Unicode content, for you, into an ANSI string.

This ANSI-to-Unicode (and vice versa) conversion is not limited to GUI functions, but spans the entire set of Windows APIs that take strings and have two variants. A few examples:

CreateProcess
GetUserName
OpenDesktop
DeleteFile

It is therefore very much recommended to call the Unicode version directly. In turn, it means you should always target Unicode builds, and not ANSI builds, just because you are accustomed to using ANSI strings from years past. Yes, you may still save and retrieve ANSI strings, for example in a file, or send them as a chat message in your messenger application; the conversion routines exist for such needs.

Note: There exists another typedef, WCHAR, which is equivalent to wchar_t.

The TCHAR macro is for a single character. You can definitely declare an array of TCHAR. What if you would like to express a character pointer, or a const character pointer? Which one of the following?

// ANSI characters
foo_ansi(char*);
foo_ansi(const char*);
/*const*/ char* pString;

// Unicode/wide-string
foo_uni(WCHAR*);
foo_uni(const WCHAR*);
/*const*/ WCHAR* pString;

// Independent
foo_char(TCHAR*);
foo_char(const TCHAR*);
/*const*/ TCHAR* pString;

After reading about the TCHAR stuff, you would definitely select the last one as your choice. But there are better alternatives available to represent strings. For those, you just need to include Windows.h. (Note: if your project implicitly or explicitly includes Windows.h, you need not include TCHAR.H.)

First, revisit the old string functions for better understanding. You know strlen, which may be represented as:

size_t strlen(LPCSTR);

where the symbol LPCSTR is typedef'ed as:

// Simplified
typedef const char* LPCSTR;

Essentially, LPCSTR means (Long) Pointer to a Constant String.
Let's represent strcpy using the new-style type names:

LPSTR strcpy(LPSTR szTarget, LPCSTR szSource);

The type of szTarget is LPSTR, without the C in the type name. It is defined as:

typedef char* LPSTR;

Note that szSource is LPCSTR, since the strcpy function will not modify the source buffer, hence the const attribute. The return type is a non-constant string: LPSTR.

Alright, these str-functions are for ANSI string manipulation. But we want routines for 2-byte Unicode strings, and for those the equivalent wide-character wcs-functions are provided. For example, wcslen can be represented as:

size_t wcslen(LPCWSTR szString);

where the symbol LPCWSTR is defined as:

typedef const WCHAR* LPCWSTR; // const wchar_t*

Similarly, the strcpy equivalent is wcscpy, for Unicode strings:

wchar_t* wcscpy(wchar_t* szTarget, const wchar_t* szSource);

which can be represented as:

LPWSTR wcscpy(LPWSTR szTarget, LPCWSTR szSource);

where the target is a non-constant wide string (LPWSTR), and the source is a constant wide string (LPCWSTR).

There exists a set of wcs-functions equivalent to the str-functions. The str-functions are used for plain ANSI strings, and the wcs-functions for Unicode strings. Though I already advised using the Unicode-native functions instead of the ANSI-only or TCHAR-synthesized functions (the reason is simple: your application should be Unicode only, and you should not even care about code portability for ANSI builds), for the sake of completeness I mention these generic mappings.

To calculate the length of a string, you may use the _tcslen function (a macro). In general, it is prototyped as:

size_t _tcslen(const TCHAR* szString);

or as:

size_t _tcslen(LPCTSTR szString);

Depending on the project settings, LPCTSTR maps to either LPCSTR (ANSI) or LPCWSTR (Unicode).

Note: strlen, wcslen and _tcslen return the number of characters in a string, not the number of bytes.
First, some broken code:

int main()
{
   TCHAR name[] = "Saturn";
   size_t lLen; // or int nLen
   lLen = strlen(name);
}

On an ANSI build, this code compiles successfully, since TCHAR is char, and hence name is an array of char. Calling strlen against the name variable also works flawlessly.

Alright. Let's compile the same with UNICODE/_UNICODE defined (i.e., "Use Unicode Character Set" in project settings). Now the compiler reports a set of errors, and programmers start committing mistakes by "correcting" the first error this way:

TCHAR name[] = (TCHAR*)"Saturn";

This will not pacify the compiler, since the conversion is not possible from TCHAR* to TCHAR[7]. The same error also comes up when a native ANSI string is passed to a Unicode function:

nLen = wcslen("Saturn");
// ERROR: cannot convert parameter 1 from 'const char [7]' to 'const wchar_t *'

Unfortunately (or fortunately), this error can be incorrectly "corrected" by a simple C-style typecast:

nLen = wcslen((const wchar_t*)"Saturn");

And you'd think you've attained one more experience level in pointers! You are wrong: the code gives an incorrect result, and in most cases would simply cause an Access Violation. Typecasting this way is like passing a float variable where a structure of 80 bytes is expected (logically).

The string "Saturn" is a sequence of 7 bytes, including the terminating null:

'S' (83), 'a' (97), 't' (116), 'u' (117), 'r' (114), 'n' (110), '\0' (0)

But when you pass the same set of bytes to wcslen, it treats each 2-byte pair as a single character. Therefore, the first two bytes [83, 97] would be treated as one character having the value 24915 (97<<8 | 83, since x86 is little-endian), which is a Chinese character. The next character would be formed from the bytes [116, 117], and so on.

For sure, you didn't pass that set of Chinese characters, but the improper typecast has produced them! Therefore, it is very essential to know that type-casting will not work!
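Since the underlying buffer is just bytes, this failure mode can be reproduced in any language. The following Python sketch, used here purely as an illustration and not as Windows API code, mimics what byte-oriented strlen and 2-byte-oriented wcslen each "see" when handed the same buffers:

```python
# "Saturn" as an ANSI buffer: one byte per letter, plus a null terminator.
ansi = b"Saturn\x00"

def strlen(buf):
    """Mimic C strlen: count bytes up to the first zero byte."""
    n = 0
    while buf[n] != 0:
        n += 1
    return n

def wcslen(buf):
    """Mimic C wcslen: count 2-byte little-endian units up to the first zero unit."""
    n = 0
    while True:
        unit = buf[2 * n] | (buf[2 * n + 1] << 8)
        if unit == 0:
            return n
        n += 1

# "Saturn" as a wide (UTF-16-LE) buffer, plus a 2-byte null terminator.
wide = "Saturn".encode("utf-16-le") + b"\x00\x00"

print(strlen(ansi))   # 6: correct for the ANSI buffer
print(strlen(wide))   # 1: stops at the zero byte after 'S' (bytes 83, 0)
print(wcslen(wide))   # 6: correct for the wide buffer
print(ansi[0] | (ansi[1] << 8))  # 24915: the bogus "character" from ['S', 'a']
```

The last line is exactly the miscast described above: the bytes of 'S' and 'a' fused into one 2-byte character.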
So, for the first line of the initialization, you must do:

TCHAR name[] = _T("Saturn");

which translates to 7 bytes or 14 bytes, depending on compilation. The call to wcslen should be:

wcslen(L"Saturn");

In the sample program given above, I used strlen, which causes an error when building for Unicode. The non-working "solution" is a C-style typecast:

lLen = strlen((const char*)name);

On a Unicode build, name would be 14 bytes (7 Unicode characters, including the null). Since the string "Saturn" contains only English letters, which can be represented using original ASCII, the Unicode letter 'S' is represented as the bytes [83, 0]. The other ASCII characters are likewise represented with a zero byte next to them. Note that 'S' is now represented as the 2-byte value 83, and the end of the string is represented by two bytes having the value 0.

So, when you pass such a string to strlen, the first character (i.e., the first byte) would be correct ('S' in the case of "Saturn"). But the second byte is zero, which indicates end of string. Therefore, strlen returns the incorrect value 1 as the length of the string.

As you know, a Unicode string may contain non-English characters, so the result of strlen would be even more unpredictable. In short, typecasting will not work. You either need to represent strings in the correct form itself, or use the ANSI-to-Unicode (and vice versa) conversion routines. (There is more to add at this location, stay tuned!)

Now, I hope you understand the following signatures:

BOOL SetCurrentDirectory( LPCTSTR lpPathName );
DWORD GetCurrentDirectory(DWORD nBufferLength, LPTSTR lpBuffer);

Continuing: you must have seen some functions/methods asking you to pass a number of characters, or returning a number of characters. With GetCurrentDirectory, for instance, you need to pass the number of characters, not the number of bytes.
For example:

TCHAR sCurrentDir[255];
// Pass 255, and not 255*2
GetCurrentDirectory(255, sCurrentDir);

On the other side, if you need to allocate a number of characters, you must allocate the proper number of bytes. In C++, you can simply use new:

LPTSTR pBuffer; // TCHAR*
pBuffer = new TCHAR[128]; // Allocates 128 or 256 BYTES, depending on compilation.

But if you use memory allocation functions like malloc, LocalAlloc, GlobalAlloc, etc., you must specify the number of bytes!

pBuffer = (TCHAR*) malloc(128 * sizeof(TCHAR));

Typecasting the return value is required, as you know. The expression in malloc's argument ensures that it allocates the desired number of bytes, making room for the desired number of characters.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/76252/What-are-TCHAR-WCHAR-LPSTR-LPWSTR-LPCTSTR-etc?msg=4456447
Happy New Year! 2008-12-31

Changes of this week:
- Added access to collision contacts of Players.
- Fixed minor bug in UnigineScript with namespaces and user-defined classes.
- Refactored Actor system.
- LightSpot performance optimization.
- User classes in UnigineScript can be stored as keys in maps.
- Character class divided into Character, CharacterParticipant, and CharacterNpc.
- Fixed missing Windows SDK issue in install.py script.
- Updated reference manual (Console, UnigineScript library).

Current Unigine codebase statistics:

engine          6611 Kb    672 files   237312 lines
shaders         1148 Kb    288 files    41467 lines
scripts          162 Kb     15 files     5862 lines
editor          1221 Kb     96 files    42125 lines
tools            765 Kb     61 files    27704 lines
app main          62 Kb     28 files     2238 lines
app samples      238 Kb     62 files     9871 lines
build system      39 Kb     29 files     1770 lines
build scripts     59 Kb     91 files     1736 lines
docs scripts      30 Kb      7 files      963 lines
total          10339 Kb   1349 files   371048 lines

Happy New Year!

Previous: Cinematic DoF and fast lights
https://developer.unigine.com/devlog/20081231-happy-new-year
Law of Large Numbers and the Central Limit Theorem (With Python) Want to share your content on python-bloggers? click here. Statistics does not usually go with the word “famous,” but a few theorems and concepts of statistics find such universal application, that they deserve that tag. The Central Limit Theorem and the Law of Large Numbers are two such concepts. Combined with hypothesis testing, they belong in the toolkit of every quantitative researcher. These are some of the most discussed theorems in quantitative analysis, and yet, scores of people still do not understand them well, or worse, misunderstand them. Moreover, these calculations are usually not done manually—the datasets are too large, computations are time consuming—so it is equally important to understand the computation aspect of these theorems well. A working knowledge of probability, random variables and their distributions is required to understand these concepts. Sample Mean A “sample” is a set of outcomes in an experiment or an event. A sample can sometimes also be replaced by “trials” or number of repetitions of the experiment. For example, tossing a coin with probability p of achieving a heads is n Bernoulli (p) trials. If the outcome is a random variable X, then X~Binomial(n, p) distribution (with n number of Bern(p) trials). The values in a sample, \(X_{1},X_{2},X_{3}, …, X_{n}\), will all be random variables, all drawn from the same probabilistic distribution since they are outcomes of the same experiment. The \(X_{1},X_{2},X_{3}, …, X_{n}\) here are not actual numbers but names of random variables. A realization of the random variable \(X_{1}\) will be \(x_{1}\), an actual number. A realization or an actual value of an experiment is one reading from the distribution. In a way, a probabilistic distribution is not a physical thing, and hence sometimes hard to relate to in everyday actions, even though highly applicable in everyday activities. It can be thought of as a tree. 
For example, a mango tree. We know the characteristics of the tree, what types of leaves it has, what types of fruits, their shape, size, color etc. We have a good idea of what a mango tree is, even if we don’t physically see it & know each and every value. That is similar to a probabilistic distribution of a random variable X. One specific leaf, x, plucked from a mango tree, is like a realization of the random variable X. Sample Mean is a random variable itself, as it is an average of other random variables. When referring to outcomes of the same experiment, all outcomes will belong to the same distribution, and hence will be identical in distribution. If each trial or sample is independent of the others, the random variables \(X_{1},X_{2},X_{3}, …, X_{n}\), will also be independent. This is then a set of I.I.D. (Independent and Identically Distributed) random variables. For I.I.D. random variables \(X_{1},X_{2},X_{3}, …, X_{n}\) the sample mean \({\overline{X_n}}\) is simply the arithmetic average of the set of random variables. (Note: the definition of sample mean applies to any set of random variables, but the fact that they are I.I.D. is going to be a special case scenario in common experiments, useful for deriving some important theorems.) \[{\bar{X_n}} = \sum_{i=1}^{n}{X_i}/n\] In a small lab experiment, like measuring the length of an instrument with Vernier Calipers, we normally observe and record 3-5 readings of a measurement, and take the average of the readings to report the final value, to cancel any errors. This is high-school level mathematics. In research experiments, this level of simplification is not possible. But the idea behind the averaging is the same. In real life, we draw samples, Xi, and take the expectation of the samples to get the expectation of the sample mean. But the expectation of the sample mean is an average of all possible outcomes for each random variable Xi, not just the realized values. 
So, in a sense, it is a theoretical average.

\[E[{\overline{X_n}}] = E[\sum_{i=1}^{n}{X_i}/n]\]

When we take the expectation, the expectation of the errors becomes zero: E[e] = 0.

Convergence in Probability

Convergence in probability is defined as follows: a sequence of random variables Xn is said to converge in probability to a number a if, for any given small number ϵ, the probability of the difference between Xn and a being greater than ϵ tends to zero as n approaches ∞. For any \(\epsilon\) > 0,

\[\lim_{n \rightarrow \infty } P(|X_n - a| \geq \epsilon ) = 0\]

So, the distribution of Xn bunches around the number a for a large enough n. But it is not always necessary that the expectation E[Xn] converges to a too. This can be explained by the presence of outliers, which might pull the expectation away from the number a, where the big proportion of the outcomes lies.

Law of Large Numbers (LLN)

As per the LLN, as the sample size n tends to ∞, the sample mean tends to the true mean μ of the population with probability 1. This is true for a set of I.I.D. random variables Xi with mean μ and variance σ². (Note that the expectation of the sample mean equals μ exactly for every n; it is the sample mean itself that converges.)

\[{\overline{X_n}} = \sum_{i=1}^{n}{X_i}/n \underset{n \longrightarrow \infty }{\longrightarrow} \mu \]

This can be simulated and tested in Python by creating, say, 15 random variables X1 to X15 with Xi ~ Bin(n, p), using the random generator of NumPy. The Xi must be I.I.D. We calculate the value of the sample mean by averaging the variables. The true mean (mu in the code) is very close to the calculated value mean, based on the randomly generated distributions. Note that in NumPy, np.random.binomial(n, p, size=None) uses a slightly different notation for the Binomial distribution than what we have been using so far: here n refers to the number of trials in one variable, p is the probability of success, and size is the sample size (e.g., number of coins tossed).
It treats the Binomial distribution as a sum of indicator random variables; hence the output is the number of successes for each sample (like each coin). So, if we take size as 5 and n (trials) as 100, the output will be a list of 5 numbers, the number of successes out of 100 for each sample (e.g., for each coin). For the sake of simplicity, though, I have created 15 separate random variables, each with size=1, for illustrative purposes. XN refers to the sample mean in the code.

import numpy as np
import scipy.stats as sp

# Running the simulation with 15 IID Binomial RVs of size=1 each,
# with n=1000 trials; probability of success is p=0.5
X1 = np.random.binomial(1000, 0.5, 1)
X2 = np.random.binomial(1000, 0.5, 1)
X3 = np.random.binomial(1000, 0.5, 1)
X4 = np.random.binomial(1000, 0.5, 1)
X5 = np.random.binomial(1000, 0.5, 1)
X6 = np.random.binomial(1000, 0.5, 1)
X7 = np.random.binomial(1000, 0.5, 1)
X8 = np.random.binomial(1000, 0.5, 1)
X9 = np.random.binomial(1000, 0.5, 1)
X10 = np.random.binomial(1000, 0.5, 1)
X11 = np.random.binomial(1000, 0.5, 1)
X12 = np.random.binomial(1000, 0.5, 1)
X13 = np.random.binomial(1000, 0.5, 1)
X14 = np.random.binomial(1000, 0.5, 1)
X15 = np.random.binomial(1000, 0.5, 1)

# Sample Mean
XN = (X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10 + X11 + X12 + X13 + X14 + X15)/15

mean = np.mean(XN)  # Calculated mean of the sample
print("Sample Mean: " + str(mean))

mu = sp.binom.mean(1000, 0.5)  # True mean of the sample
print("True Mean: " + str(mu))

Output:
Sample Mean: 500.8666666666667
True Mean: 500.0

This is the result for just 15 random variables. As the number increases, the sample mean gets closer to the true mean. (Note: every time you run the code, it gives a new value for the sample mean, because a new set of random variables is generated every time. You can check and run the code here.)
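The same convergence can be observed without NumPy at all. The following standard-library-only sketch (with a fixed seed so the run is reproducible) grows the number of samples and watches the sample mean of Bin(1000, 0.5) draws approach the true mean n·p = 500:

```python
import random
import statistics

random.seed(42)  # fixed seed for a reproducible run

def binomial(trials, p):
    """One Binomial(trials, p) draw as a sum of Bernoulli(p) indicators."""
    return sum(random.random() < p for _ in range(trials))

true_mean = 1000 * 0.5  # n * p for Bin(1000, 0.5)

for n_samples in (10, 100, 1000):
    draws = [binomial(1000, 0.5) for _ in range(n_samples)]
    sample_mean = statistics.mean(draws)
    print(n_samples, sample_mean, abs(sample_mean - true_mean))
```

As n_samples grows, the gap between the sample mean and the true mean typically shrinks, which is exactly the convergence the LLN promises (in probability, so individual runs can fluctuate).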
The variance of the Sample Mean \(\overline{X_n}\) is calculated as follows:

\[Var(\overline{X_n}) =\frac{Var(X_1 + X_2 + \dots + X_n)}{n^2} = \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n} \]

Since the Xi are independent, we can use the additivity of variances for independent variables to find the variance of the sample mean. Using the variance calculated above and Chebyshev's inequality, we can prove the Weak Law of Large Numbers.

Weak Law of Large Numbers

As per Chebyshev's inequality, for any \(\epsilon\) > 0,

\[P(|Y_n - a| \geq \epsilon ) \leq \frac{Var(Y_n)}{ \epsilon ^2} \]

Plugging our values into this inequality, we get:

\[ P(|\overline{X_n} - \mu| \geq \epsilon ) \leq \frac{\sigma^2}{ n\epsilon ^2} \underset{n \longrightarrow \infty }{\overset{}{\longrightarrow}} 0 \]

As n approaches infinity, the probability of the difference between the sample mean and the true mean μ exceeding ϵ tends to zero, taking ϵ as a fixed small number.

Central Limit Theorem

So far, we have not mentioned anything about which distribution the Xi belong to, or about the distribution of the sample mean (which is a random variable too, remember?). Most of the time, knowing the mean is not enough; we would like to know more about the final distribution of the sample mean so we can understand its properties. The Central Limit Theorem describes exactly this.
The Central Limit Theorem (CLT) says:

\[ \frac{\sqrt{n} (\overline{X_n} - \mu )}{ \sigma } \underset{n \longrightarrow \infty }{\longrightarrow} N(0,1) \]

import numpy as np
import scipy.stats as sp
import matplotlib.pyplot as plt
import math

# Running the simulation with 10 IID Binomial RVs for 500 coins,
# with 1000 trials; probability of success is 0.5
X1 = np.random.binomial(1000, 0.5, 500)
X2 = np.random.binomial(1000, 0.5, 500)
X3 = np.random.binomial(1000, 0.5, 500)
X4 = np.random.binomial(1000, 0.5, 500)
X5 = np.random.binomial(1000, 0.5, 500)
X6 = np.random.binomial(1000, 0.5, 500)
X7 = np.random.binomial(1000, 0.5, 500)
X8 = np.random.binomial(1000, 0.5, 500)
X9 = np.random.binomial(1000, 0.5, 500)
X10 = np.random.binomial(1000, 0.5, 500)

XN = (X1+X2+X3+X4+X5+X6+X7+X8+X9+X10)/10

mean = np.mean(XN)  # Calculated mean of the sample
print("Sample Mean: " + str(mean))

mu = sp.binom.mean(1000, 0.5)  # True mean of the sample
print("True Mean: " + str(mu))

sigma = sp.binom.std(1000, 0.5)
print("True standard deviation: " + str(sigma))

# Plotting the sample mean distribution for the CLT
gauss = np.random.standard_normal(5000)
nsigma = 1
nmu = 0
N = math.sqrt(10)
ZN = (N * (XN - mu)) / sigma

plt.figure(figsize=(10, 7))
count, bins, ignored = plt.hist(gauss, 100, alpha=0.5, color='orange', density=True)
plt.plot(bins, 1/(nsigma * np.sqrt(2 * np.pi)) * np.exp(-(bins - nmu)**2 / (2 * nsigma**2)),
         linewidth=3, color='r')
plt.hist(ZN, 100, alpha=0.7, color="Navy", density=True)
plt.legend(["Std Normal PDF", "Standard Normal Distribution", "Sample Mean Distribution"], loc=2)
plt.savefig("CLT_new.jpeg")
plt.show()

You can test and run the code from here. Note that n is still not very large here; as n gets bigger, the results get closer and closer to the Standard Normal distribution. This is another example of the proof of the theorem in Python, taking random variables with a Poisson distribution as the sample (figure below).
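As a cross-check that needs neither NumPy nor SciPy, we can standardize sample means drawn from a decidedly non-normal distribution and verify that the standardized values have mean close to 0 and standard deviation close to 1, as the theorem predicts. The following standard-library-only sketch uses the exponential distribution (skewed, with mean 1 and standard deviation 1) and a fixed seed for reproducibility:

```python
import math
import random
import statistics

random.seed(7)  # fixed seed for a reproducible run

n = 50             # sample size for each sample mean
experiments = 2000 # number of sample means to collect
mu, sigma = 1.0, 1.0  # Exp(1) has mean 1 and standard deviation 1

# Collect standardized sample means: sqrt(n) * (sample mean - mu) / sigma
z_values = []
for _ in range(experiments):
    sample = [random.expovariate(1.0) for _ in range(n)]
    x_bar = statistics.mean(sample)
    z_values.append(math.sqrt(n) * (x_bar - mu) / sigma)

print(statistics.mean(z_values))   # close to 0
print(statistics.stdev(z_values))  # close to 1
```

Even though each individual draw comes from a heavily skewed distribution, the standardized sample means already behave approximately like N(0, 1) at n = 50, which is the practical content of the CLT.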
As per the Central Limit Theorem, the distribution of the sample mean converges to the Standard Normal distribution (after centering and scaling) as n approaches infinity. This is not a very intuitive result, and yet it turns out to be true. The proof of the CLT works by taking the moment generating function of the standardized sample mean and showing that it converges to the moment generating function of the Standard Normal (refer to this lecture for a detailed proof). Since the proof involves advanced statistics and calculus, it is not covered here. It is important to remember that the CLT is applicable for 1) independent, 2) identically distributed variables with 3) a large sample size. What counts as "large" is an open-ended question, but a sample size of around 30 is accepted by most practitioners; the larger the sample size, the better. In some cases, even if the variables are not strictly independent, a very weak dependence can be approximated as independence for the purpose of analysis. In nature it may be difficult to find complete independence, as there may be some unaccounted-for externalities, but if there is no clear, strong dependence, independence is usually assumed. That said, the researcher must proceed with caution and weigh each situation on its own merits. The Standard Normal distribution has many nice properties which simplify calculations and give intuitive results for experimental analysis. The bell curve is symmetric, centered around zero, has standard deviation 1, is unimodal and is overall easy to read. The Z tables make it easy to relate values to the CDF. No wonder the CLT finds wide application in research, exploratory data analysis and machine learning.

References:

Law of Large Numbers and Central Limit Theorem | Statistics 110
John Tsitsiklis, and Patrick Jaillet. RES.6-012 Introduction to Probability. Spring 2018. Massachusetts Institute of Technology: MIT OpenCourseWare. License: Creative Commons BY-NC-SA.
https://python-bloggers.com/2020/11/law-of-large-numbers-and-the-central-limit-theorem-with-python/
Do you find yourself wanting to store more than just strings, numbers and dates in your tables? Perhaps you have complex records with dynamic fields, or perhaps you want to store lists of data without making a whole new table. Now you can store structured data in a single row of your table:

app_tables.people.add_row(
  name = "Kermit the Frog",
  contact_details = {
    'phone': [
      {'type': 'work', 'number': '555-555-5555'},
      {'type': 'home', 'number': '555-123-4567'}
    ]
  })

Querying them is just as simple. If you supply an object to the Data Tables search() or get() method, you'll match every value that contains those values, dictionary keys or list items. Here's an example:

def get_person_for_number(phone_number):
  # We have a phone call for this number. Who is it?
  return app_tables.people.get(
    contact_details = {'phone': [{'number': phone_number}]}
  )

You can read more about Simple Object columns in the Data Tables reference docs.

After that, the card is on file, and we can charge them at the click of a button. (We also note our profit margin for future reference.)

You can grab the source code yourself – click below to open the apps in the Anvil editor. Want to dig into how these apps work? Our tutorials show you all the pieces these apps are made from, and more. Follow along as we start from a single stock chart and build up to a full subscription service. You can also copy the source code and explore it yourself:
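Returning to the Simple Object queries above: the "contains" matching they rely on can be illustrated in plain Python. This is only a sketch of the semantics as described, not Anvil's actual implementation:

```python
def matches(value, pattern):
    """Return True if `value` contains `pattern`: every dict key,
    list item, or scalar in the pattern must be found in the value."""
    if isinstance(pattern, dict):
        return (isinstance(value, dict) and
                all(k in value and matches(value[k], v)
                    for k, v in pattern.items()))
    if isinstance(pattern, list):
        return (isinstance(value, list) and
                all(any(matches(item, p) for item in value)
                    for p in pattern))
    return value == pattern

row = {'phone': [{'type': 'work', 'number': '555-555-5555'},
                 {'type': 'home', 'number': '555-123-4567'}]}

print(matches(row, {'phone': [{'number': '555-123-4567'}]}))  # True
print(matches(row, {'phone': [{'number': '000'}]}))           # False
```

Note how a partial pattern like `{'number': ...}` matches a fuller dictionary, which is what makes the query-by-example style above work.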
https://anvil.works/blog/page-5.html
I have a question in C++. Can you help me and tell me where the problem is in my solution?

The question is: Assume that the maximum number of students in a class is 50. Write a program that reads students' names followed by their test score from a file and outputs the following:
a. class average
b. Names of all the students whose test scores are below the class average, with an appropriate message
c. Highest test score and the names of all the students having the highest score

You can use this file: in.txt
Ahmed 60.8
Mona 87.3
Ali 77.1
Mahmood 97.9
Isa 63.1
Zainab 100

MY SOLUTION:

#include<iostream>
#include<string>
#include<fstream>
using namespace std;

int main()
{
    int count = 0;
    float avrg;
    int maxIndex = 0;
    float sum = 0;
    float scores[50];
    string names[50];
    ifstream infile;
    infile.open("in.txt");
    while (!infile.eof() && (count <= 50))
    {
        infile >> names[count] >> scores[count];
        count++;
        sum += scores[count];
    }
    avrg = sum / count;
    cout << avrg;
    if (scores[50] < avrg)
        cout << names[count];
    cout << endl;
    maxIndex = scores[0];
    if (scores[count] > maxIndex)
        maxIndex = count;
    cout << names[count];
    infile.close();
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/104091/please-help-me-i-want-it-for-tomorrow
by Chris Adamson

This blog comes from two different places. The first is my decade-long general indifference to really using the clipping rectangle, which in Java2D is a shape that indicates what part of a Graphics object is to be painted in. It offers the potential for graphics optimizations because the rendering code can assume that anything outside of the clipping area is unchanged, meaning it can be copied from a back buffer, not repainted, etc., depending on context. But I got burned the first time I played with the clipping rectangle (back in Java 1.0, it wasn't an arbitrary shape...). Dutifully typing in programs from Laura Lemay's Teach Yourself Java in 21 Days, I found that the Mac JDK 1.0 mis-implemented the clipping rectangle. Every time you set it, the resulting clipping rect would be the intersection of your rect and the existing clipping rect. This meant that setClip() calls could only make the clipping rectangle smaller, ultimately making it impossible to draw to your Graphics. Not good. So, I was a little gun-shy about clipping after that, even after Sun was chased away from the Mac and the job of writing Mac Java runtimes fell to Roaster, Metrowerks, and ultimately Apple. Fast forward to today. I seemingly can't shake this insane idea I have about writing a kick-ass podcast editor with QuickTime for Java. Insane because I don't have enough time to take care of my editing responsibilities for ONJava and java.net as it is. I guess I'm really itching to code something, and I think QTJ's editing API's would really help move such a project along quickly. One of the things I worried about, though, is the idea of the waveform viewer. This is a custom component that represents the samples graphically, which requires (among other things) looking up a bunch of samples to determine how high to draw the bar.
Yeah, there's already one of these in Swing Hacks (courtesy of contributor Jonathan Simon), but this needs to be different because it won't necessarily be able to load all the audio into RAM (or, at the very least, I want to just be able to call a QuickTime sound media's getSample() and not care how the data gets to me). After all, at CD quality of 44,100 samples a second, a one-hour podcast would have 44100 samples/sec * 60 sec/min * 60 min/hr * 1 hr = 158,760,000 samples. If you zoom into a detail view of the wave-form, all the way down to the individual sample level (i.e., one pixel per sample), that component's going to get mighty big. That got me thinking that a component that's tens of thousands, or even millions of pixels wide, would not necessarily be a good thing to put in a JScrollPane. After all, if you have a custom component, with a custom paintComponent() method, you'll do all your painting, blissfully unaware of whether a given pixel is going to be seen through the scroll pane's viewport or not, and if you're showing just a few hundred horizontal pixels of something thousands of times wider, that'll be wasteful. In fact, it will be intolerably expensive if you have to do a lot of work (e.g., looking up samples) in order to draw pixels that won't even be shown. That brings me back to the clipping rectangle. Graphics, an instance of which is handed to your rendering code in paintComponent(), has a getClip() method. I wondered to myself if this represented the sub-section of the Graphics that would actually be seen, i.e., the portion in the scroll-pane's viewport. If so, there's a potential for optimization: before you go to draw something, look to see if any part of it is in the clip area. If not, don't bother painting it. Here's a piece of code to exercise this approach. It creates a custom component, PaintyThing, that consists of a horizontal series of numbered red boxes.
By default, it draws 10,000 100x200 boxes, meaning the component is one million pixels wide. That should be an appropriately brutal test of the rendering optimization.

import java.awt.*;
import javax.swing.*;

public class ScrollClip extends JFrame {

    public static final int PAINTY_HEIGHT = 200;
    public static final int PAINTY_WIDTH_INCREMENT = 100;
    public static final int PAINTY_COUNT = 10000;

    public static boolean useClipOptimization;
    static {
        String useClipPref = System.getProperty ("use.clip.optimization");
        useClipOptimization = (useClipPref != null) &&
            useClipPref.equalsIgnoreCase ("true");
        if (! useClipOptimization)
            System.out.println ("Run with -Duse.clip.optimization=true "+
                                "to use cliprect optimization");
    }

    JScrollPane pain;
    PaintyThing paintyThing;

    public ScrollClip () {
        super ("Scroll Clip");
        paintyThing = new PaintyThing (PAINTY_COUNT);
        pain = new JScrollPane (paintyThing,
                   ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED,
                   ScrollPaneConstants.HORIZONTAL_SCROLLBAR_ALWAYS);
        getContentPane().add (pain);
    }

    public static void main (String[] args) {
        ScrollClip sc = new ScrollClip();
        sc.pack();
        // force a reasonable size so swing doesn't really try
        // to show the whole paintyThing
        sc.setSize (PAINTY_WIDTH_INCREMENT * 3,
                    sc.getSize().height + 1);
        sc.setVisible(true);
    }

    class PaintyThing extends JComponent {
        int paintyCount = 1;
        Dimension calcSize = null;
        Stroke boxStroke = null;

        public PaintyThing (int i) {
            super();
            paintyCount = i;
            calcSize = new Dimension (i * PAINTY_WIDTH_INCREMENT,
                                      PAINTY_HEIGHT);
            boxStroke = new BasicStroke (3);
        }

        // not sure if these are useful. habit.
        public Dimension getPreferredSize() {return calcSize;}
        public Dimension getMinimumSize() {return calcSize;}
        public Dimension getMaximumSize() {return calcSize;}

        public void paintComponent (Graphics og) {
            long inTime = System.currentTimeMillis();
            Graphics2D g = (Graphics2D) og;
            g.setColor (Color.white);
            g.fillRect (0, 0, getWidth(), getHeight());
            // System.out.println (g.getClip());
            Rectangle clipRect = (g.getClip() instanceof Rectangle)
                ? (Rectangle) g.getClip()
                : null;
            g.setStroke (boxStroke);
            for (int i=0; i<paintyCount; i++) {
                Rectangle paintBox =
                    new Rectangle (i * PAINTY_WIDTH_INCREMENT, 0,
                                   PAINTY_WIDTH_INCREMENT - 4,
                                   PAINTY_HEIGHT);
                // performance boost - don't paint if no part of the
                // rect to paint is in the current clipping region
                if (useClipOptimization &&
                    (clipRect != null) &&
                    (! paintBox.intersects (clipRect))) {
                    // System.out.println ("don't paint " + i);
                    continue;
                }
                // System.out.println ("paint " + i);
                g.setColor (Color.red);
                g.draw (paintBox);
                g.setColor (Color.blue);
                g.drawString (Integer.toString (i),
                              (i * PAINTY_WIDTH_INCREMENT) + 10,
                              PAINTY_HEIGHT/2);
            }
            System.out.println ("paintComponent(): " +
                                (System.currentTimeMillis() - inTime));
        }
    }
}

The ScrollClip class is mostly just responsible for putting a PaintyThing in a scroll pane, then putting that in a reasonably-sized JFrame and putting it on screen. It also uses a static initializer to look to see if you set a system property, use.clip.optimization, to enable the optimization. Look closely at the optimization, because that's what matters. As it loops through the rectangle coordinates, it asks for any given rectangle "Am I doing optimization, does the Graphics have a cliprect, and is this rectangle I'm about to draw completely outside of the cliprect? If so, skip to the next rectangle." Here's what the app looks like: If you notice, I left in some println's to show how long it takes to get through paintComponent().
Run with java ScrollClip and you'll find that scrolling is somewhat unresponsive. That's because the repaints are taking about half a second each (on a dual 1.8 G5 Mac):

paintComponent(): 354
paintComponent(): 389
paintComponent(): 348
paintComponent(): 445
paintComponent(): 389
paintComponent(): 351

Run again with java -Duse.clip.optimization=true ScrollClip to use the "paint only if necessary" optimization. Notice how much more responsive this version is. You can zip the scrollbar back and forth with impunity because the repaint time is consistently less than 10 ms.

paintComponent(): 5
paintComponent(): 4
paintComponent(): 8
paintComponent(): 5
paintComponent(): 6
paintComponent(): 4

Not bad for scrolling around a million-pixel wide component. Better than I expected, actually. Looking at paintComponent(), I can also see right away two further optimizations that could be made. Do you see them? If you push the number of pixels higher, you see a fairly consistent two-orders-of-magnitude advantage to the optimization. Use 100,000 rectangles, and the optimized repaints will be in the tens of milliseconds (which is still great). As you go higher still, you approach two limits: the repainting is slow even with the optimization, and the width of PaintyThing approaches Integer.MAX_VALUE, which is the end of the party.

My Swing Hacks co-author, now a member of Sun's Swing team, thinks this is something of an edge case, since not that many people are creating custom components, and particularly not of this extreme size. He may be right; though I've created a lot of custom components in my work, they've never been millions of pixels wide until now.
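One further optimization along these lines (my own sketch, not from the original post) is to compute the range of visible box indices directly from the clip rectangle, rather than constructing and intersection-testing a Rectangle for all 10,000 boxes. The index arithmetic can live in a small helper that needs no GUI at all:

```java
public class VisibleRange {

    // Given a clip rectangle's x and width, return the first and last
    // (exclusive) box indices that can intersect it, for boxes laid out
    // horizontally every widthIncrement pixels.
    static int[] visibleBoxes(int clipX, int clipWidth,
                              int widthIncrement, int boxCount) {
        int first = Math.max(0, clipX / widthIncrement);
        int last = Math.min(boxCount,
                            (clipX + clipWidth) / widthIncrement + 1);
        return new int[] { first, last };
    }

    public static void main(String[] args) {
        // A 300-pixel-wide viewport scrolled to x = 1000, boxes 100 px wide
        int[] range = visibleBoxes(1000, 300, 100, 10000);
        System.out.println(range[0] + ".." + range[1]);
    }
}
```

The paint loop would then run only from first to last, and as a bonus the Rectangle allocation can be hoisted out of the loop entirely.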
In fact, I've assumed that I would probably need to drop the scroll pane and manage my own JScrollBar, dropping the fiction of the single component that shows the whole waveform and instead having one that just renders the part of the waveform indicated by the zoom level and the scrollbar. Still, it's nice to see that that's potentially an optimization to do later, and that use of the clipping rectangle could provide performance that's good enough to use in early prototyping. It also makes me think I should have been more aggressive about using clipping in my previous Swing work. Maybe the payoff won't be big, but my sense of Swing performance is that there's no single thing that slows down Swing, but rather that you die the "death of a thousand papercuts" from a deep hierarchy of less-than-optimal layout and rendering choices. Examining the clip before you paint, and setting it yourself when you paint non-scrolling components, is a simple optimization that could basically be reduced to a standard refactoring step, and one that might score you some performance. By the way, did you find the additional optimizations in PaintyThing.paintComponent() that I mentioned?
http://archive.oreilly.com/pub/post/a_scrolly_clippy_swing_optimiz.html
# Starting My Collection of Bugs Found in Copy Functions

![memcpy](https://habrastorage.org/r/w1560/getpro/habr/post_images/741/90d/4d9/74190d4d9f31774d50d91381d3f3f536.png)

I've already noticed a few times before that programmers tend to make mistakes in simple copy functions. Writing a profound article on this topic is going to take quite a while, since I'll have to do some thorough research and sample collecting, but for now I'd like to share a couple of examples I stumbled upon recently.

The Baader-Meinhof phenomenon? I don't think so
-----------------------------------------------

As a member of the PVS-Studio team, I come across numerous bugs found with our tool in various projects. And as a DevRel, I love telling people about that :). Today I'm going to talk about incorrectly implemented copy functions. I saw such functions before, but I never wrote them down, as I didn't think they were worth mentioning. But since I discovered the tendency, I can't help but start collecting them. For a start, I'm going to show you two recently found specimens. You may argue that two cases don't make a tendency yet; that I paid attention solely because they occurred too close in time and the [Baader-Meinhof](https://science.howstuffworks.com/life/inside-the-mind/human-brain/baader-meinhof-phenomenon.htm) phenomenon kicked in.

*The Baader-Meinhof phenomenon, also called Frequency Illusion, is a cognitive bias where a person stumbles upon a piece of information and soon afterwards encounters the same subject again, which makes them believe this subject appears exceptionally frequently.*

I don't think that's the case. I already had a similar experience with poorly written comparison functions, and my observation was later proved by real examples: "[The Evil within the Comparison Functions](https://www.viva64.com/en/b/0509/)". Okay, let's get to the point. That introduction was a bit too long for a brief note on two examples :).
Example 1
---------

In the [article](https://www.viva64.com/en/b/0721/) about the check of the Zephyr RTOS, I mentioned a failed attempt to make a function that should work like *strdup*:

```
static char *mntpt_prepare(char *mntpt)
{
    char *cpy_mntpt;

    cpy_mntpt = k_malloc(strlen(mntpt) + 1);
    if (cpy_mntpt) {
        ((u8_t *)mntpt)[strlen(mntpt)] = '\0';
        memcpy(cpy_mntpt, mntpt, strlen(mntpt));
    }
    return cpy_mntpt;
}
```

PVS-Studio diagnostic message: [V575](https://www.viva64.com/en/w/v575/) [CWE-628] The 'memcpy' function doesn't copy the whole string. Use 'strcpy / strcpy\_s' function to preserve terminal null. shell.c 427

The analyzer says the *memcpy* function copies the string but fails to copy the terminating null character, which is a very strange behavior. You may think the copying of the terminating null takes place in the following line:

```
((u8_t *)mntpt)[strlen(mntpt)] = '\0';
```

But that's wrong – this is a typo that causes the terminating null to get copied into itself. Notice that the target array is *mntpt*, not *cpy\_mntpt*. As a result, the *mntpt\_prepare* function returns a non-terminated string. This is what the code was actually meant to look like:

```
((u8_t *)cpy_mntpt)[strlen(mntpt)] = '\0';
```

I don't see any reason, though, for implementing this function in such a complicated and unconventional way. Because of this overcomplicating, what should have been a small and simple function ended up with a critical bug in it.
This code can be reduced to the following:

```
static char *mntpt_prepare(char *mntpt)
{
    char *cpy_mntpt;

    cpy_mntpt = k_malloc(strlen(mntpt) + 1);
    if (cpy_mntpt) {
        strcpy(cpy_mntpt, mntpt);
    }
    return cpy_mntpt;
}
```

Example 2
---------

```
void myMemCpy(void *dest, void *src, size_t n)
{
    char *csrc = (char *)src;
    char *cdest = (char *)dest;
    for (int i=0; i<n; i++)
        cdest[i] = csrc[i];
}
```

We didn't catch this one; I came across it on StackOverflow: [C and static Code analysis: Is this safer than memcpy?](https://stackoverflow.com/questions/55239222/c-and-static-code-analysis-is-this-safer-than-memcpy)

Well, if you check this function with PVS-Studio, it will expectedly issue the following warnings:

* [V104](https://www.viva64.com/en/w/v104/) Implicit conversion of 'i' to memsize type in an arithmetic expression: i < n test.cpp 26
* [V108](https://www.viva64.com/en/w/v108/) Incorrect index type: cdest[not a memsize-type]. Use memsize type instead. test.cpp 27
* [V108](https://www.viva64.com/en/w/v108/) Incorrect index type: csrc[not a memsize-type]. Use memsize type instead. test.cpp 27

Indeed, this code has a flaw, and it was pointed out in the replies on StackOverflow. You can't use a variable of type *int* as an index here. In a 64-bit program, an *int* variable would most certainly (we don't talk about exotic architectures now) be 32 bits long, so the function would be able to copy at most INT\_MAX bytes, i.e. not more than 2 GB. With a larger buffer to copy, a signed integer overflow will occur, which is undefined behavior in C and C++. By the way, don't try to guess how exactly the bug would manifest itself. Surprisingly, it's quite a complicated topic, which is elaborated on in the article "[Undefined behavior is closer than you think](https://www.viva64.com/en/b/0374/)". The funniest thing is that the code shown above was written in an attempt to eliminate some warning of the Checkmarx analyzer triggered by a call of the *memcpy* function.
The wisest thing the programmer could come up with was to reinvent the wheel. But the resulting copy function – however simple – ended up flawed. The programmer probably made things even worse than they had already been. Rather than try to find the cause of the warning, he or she chose to conceal the issue by writing their own function (thus confusing the analyzer). Besides, they made the mistake of using an *int* variable as a counter. And yes, code like that may not be optimizable. Replacing the existing efficient and optimized *memcpy* with a custom function is not a sound decision. Don't do that :)

Conclusion
----------

Well, it's only the start of the journey, and it might well take me a few years before I collect enough examples to write a profound article on this topic. Actually, it's only now that I'm starting to keep an eye out for such cases. Thanks for reading, and be sure to try [PVS-Studio](https://www.viva64.com/en/pvs-studio/) on your C/C++/C#/Java code – you may find something interesting.
https://habr.com/ru/post/495640/
A question I get asked a lot is: How can I make money with machine learning?

You can get a job with your machine learning skills as a machine learning engineer, data analyst or data scientist. That is the goal of a great many people that contact me. There are also other options. In this post, I want to highlight some of those other options and try to get your gears turning. There are a great many opportunities given the vast amounts of data available; you just need to think about and discover the valuable questions. These are the questions to which people and businesses will pay to have answered.

Machine Learning for Money. Photo by 401(K) 2013, some rights reserved.

Impact-First

Let's start with some methodology before we dive into example domains. Like any other machine learning problem, you are following the process of applied machine learning, but you are selecting a domain and question where there is a market to have the question answered.

- Start with a question in a domain (define your problem well). Choose a question based on the impact it has on the domain. Here, impact may be a financial return. Play thought experiments with an idealized model that can make perfect predictions.
- Collect the data you need to address the question (data selection).
- Clean and prepare the data in order to make it suitable for modeling (data preparation).
- Spot-check algorithms on the problem. Be sure to start with the simplest possible models and use them as a baseline.
- Tune the best performing models and use methods like thresholding and ensembles to get the most out of the models you have selected (improve results).
- Present the results, or put the system into operations and set up close watches (present results).

Ideally, the more accurately you address the question, the larger the return (or the bigger the bets you can make).

Your Startup

If you have your own business or web startup, then you should look very hard at leveraging the data you are already collecting.
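The "simplest possible model as a baseline" step above can be sketched without any heavy tooling. Here a majority-class baseline is compared against a one-feature threshold rule on synthetic data; all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "customer" data: one informative feature, binary conversion label
feature = rng.normal(size=1000)
label = (feature + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Baseline: always predict the majority class
majority = int(label.mean() >= 0.5)
baseline_acc = (label == majority).mean()

# Simple model: threshold the informative feature at zero
model_acc = (label == (feature > 0).astype(int)).mean()

print(f"baseline: {baseline_acc:.2f}, threshold model: {model_acc:.2f}")
```

A model worth pursuing should clearly beat the baseline; if it doesn't, the question or the data needs rethinking before any tuning.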
It is not uncommon to run a number of data collection services in a web startup, like KissMetrics, Google Analytics and many others. How can this data be used to affect your bottom line? In my experience, this is more data analyst work than machine learning work, but you can always bust out a regression model and see if it offers more lift than a simple quintile model. We covered this a little in the previous post. Nevertheless, here are some ideas of areas you could look into:

- Customer Conversions: Model the features of customers that convert or don't convert.
- Up-sell and Cross-sell: Model the features of customers that convert on up-sell or cross-sell offers.
- Acquisition Strategies: Model the value of customers by their acquisition strategies.
- Retention Strategies: Model the ROI of customer retention strategies.
- Customer Churn: Model the features of customers that churn or don't churn.

Start with the impact on the bottom line and work backward to the questions you need to ask in order to make decisions. Once you can answer the question and make a prediction for a given new customer, spend time devising and testing intervention strategies you can use to influence or capitalize on the prediction.

Development

You may be a developer or programmer that knows how to design, create and release software. Think about valuable questions online that you could answer with machine learning methods. Are there predictions or recommendations you can make that are valuable? Some examples off-the-cuff that come to mind include a variety of publicly available social media data:

- Kick Starter: Model the features of a successful or unsuccessful kick starter campaign.
- Social Media Profile: Model the features of a successful social media profile (visits or page rank) on sites like LinkedIn, Google+ or Facebook.
- Social News: Model the features of a successful post to a social news site like Hacker News or Reddit.
- Sales Page: Model the features of a successful product sales page, e.g. for e-commerce or information products.

Making money from insights into social media data is a crowded space. If you want to take this idea seriously, you are going to have to get really creative with the features you use to model the problem. The feature engineering will be your contribution, perhaps more than the actual models. This approach will most likely require the collection and processing of awkward datasets. These are datasets that are not tidy matrices of features. The modeling process would be to characterize the desirable outcome first, assess its predictive power and then make predictions that you offer to clients.

Finance and Gambling

The obvious options for making money with machine learning are finance and gambling. I am reticent to suggest these areas; I think they are very likely dangerous sirens. Like Venus flytraps, they lure programmers and machine learning practitioners and digest them.

Apply Machine Learning to the Stock Market, but with care. Photo by Iman Mosaad, some rights reserved.

The benefit is that the decisions are very clear (which horse will win or which stock to buy/sell) and you can deploy your own capital behind the decisions. I'd suggest modeling problems that are easy for you to understand; some financial instruments can be very complex. I've had my toe in high-frequency trading and portfolio optimization. It can be scary stuff, also thrilling. I recommend paper trading for a while; there are great APIs you can use for your data source. See How I made $500k with machine learning and HFT (high-frequency trading) and Financial Applications of Machine Learning. Also, you might want to take a look at Quantopian. I have not tried out any gambling problems, but I have used some of the methods, such as rating systems, that I expect feature heavily in the literature.
Consider looking into horse and dog racing, sports betting (2-player games) and card games like poker. Pace yourself. Focus on the problem, collect the data and quickly define some baseline results. Your goal is to improve upon your own best results and to leverage anything and everything that might help. Your goal is not to outperform domain experts, at least not for a long time.

Competitions

You can make money by participating in machine learning competitions, though my advice is that the cash prizes not be the primary motivation for participation. You can make a lot more money by finding consulting clients directly. Nevertheless, top competitors can win cash prizes. Some places you can find machine learning competitions include:

Competitions can be an amazing opportunity for learning, testing and improving your skills. There is typically a lot of information sharing on these sites and you can find out what algorithms and tools are hot.

Summary

We touched on four areas that you could think about in order to make money from machine learning: your own business, social data, finance and gambling, and competitions. The best method that I have found is to find people with problems that you can answer with easily accessible data (i.e. consulting). Do you make money using machine learning, or do you have some ideas for making money with machine learning?
They also tend to give greater time to ML methods than actual factor selection when in actual fact the latter is the most important. The problem is their skills are in ML and the kudos is in demonstrating how much you know about ML. My advice would be to learn as much about the sport and its betting first or buddy up with someone who does if you are the ML expert. A very well article written, Jason. I totally agree with Mark Littlewood. You got to put your data in place before it can do anything good. The data is the life of any ML model. I am also an ML expert but i also have SQL skills and data analysis experience that i can put into my ML expertise so as to leverage them. However, i know that ML is a great tool for the future. Even if one doesn’t know about data analysis, there are plenty of stuff out there online that can help you with getting the right data. There are public APIs that can be the sources of important data. II am always impressed by Machine Learning and know that its future is really bright and the people who are interested in becoming ML experts will make money like hell in the times to come. Thanks Ankush. Hi Jason !! 🙂 🙂 🙂 Quite interesting is machine learning topic for a guy like me, i want to learn this to earn much money with it. Do you think it would be a good idea to create a membership site in my native language spanish to earn money with it ? I also find fascinating the robotics topics as well, what do you suggest me to do ?…. I want to start an online business basically and earn money with it…but have a little troubles deciding which topic to cover to the spanish market ?….. So my questions for you are these ones, 1. do you consider a membership site is the best option to earn money with these topics (machine learning, artificial intelligence and robotics ) ? 2. Do you consider is better to focus my efforts only on a single topic or can i cover all of them ? 3. What about a blog or a youtube channel ? 4. what about Online ebooks ? 
Thanks for your time, really appreciate your input! 🙂 🙂 🙂 Have a blessed week!! 🙂 🙂 Cheers, from the Dominican Republic, Kreemly. I don't know about membership sites, sorry. Focus on what interests you the most. I don't know about YouTube vs blogs. I don't know about online ebooks, sorry. Hi Jason, that's a great article. I find it very helpful for people who are making up their minds about going into ML fields. I am a health expert and epidemiologist and I am wondering what the future of ML is in my field. Also, I always get questions like: as there are many statistical methods available, what is the advantage of ML over them? What is its advantage in health data, and so on? I would be really grateful if you could answer my curiosity. 🙂 Thank you for this great initiative! Impossible to answer in the general case. For a given problem, use the methods that give the best results, statistical, ML or otherwise. Thanks a lot. Will you describe which subjects or topics I need to know for machine learning, from beginner to expert level? Yes, start here:
https://machinelearningmastery.com/machine-learning-for-money/
For many years there has been only one way to write client-side logic for the web: JavaScript. WebAssembly provides another way, as a low-level language similar to assembly, with a compact binary format. Go, a popular open source programming language focused on simplicity, readability and efficiency, recently gained the ability to compile to WebAssembly. Here we explore the possibilities of writing and compiling Go to WebAssembly, from the installation of Go, to compilation to WebAssembly, to communication between JavaScript and Go. If you're not familiar with Go, please start with the Go documentation and introductory tutorial to get an overall understanding of the language. What is WebAssembly? WebAssembly (sometimes abbreviated to WASM, or in its logo WA) is a new language that runs in the browser, alongside JavaScript. One of its interesting features is that it can become a target for compilation from multiple other languages. The benefit of WebAssembly is that it can run at near-native speeds, providing an environment to run intensive or performance-critical code. The language is not designed to replace JavaScript but to exist alongside it, with the ability for the two to talk to each other. At its heart, WebAssembly is a low-level language. It can be considered similar to traditional assembly, with granular memory management and a binary format that can be compiled as fast as it downloads, producing code that's guaranteed to run at speeds close to native applications. It's very low level and designed to be a target for compiler writers rather than written by hand. This makes it an excellent target for higher-level languages like C, C++, Rust, and now: Go. Getting Up and Running with Go WebAssembly as a compile target only became available in Go 1.11. Even if you have Go installed, it's worth checking with go version, which will print out the installed version.
The way we install Go depends on your platform, and unfortunately it is not possible to cover every platform here. As an example, we install Go 1.11 on macOS and Ubuntu using bash. One way to install on macOS is to use Homebrew, the macOS package manager, to get Go 1.11. The following commands should allow you to install Go:

echo 'export GOPATH="${HOME}/.go"' >>~/.bashrc
echo 'export GOROOT="$(brew --prefix golang)/libexec"' >>~/.bashrc
echo 'export PATH="$PATH:${GOPATH}/bin:${GOROOT}/bin"' >>~/.bashrc
test -d "${GOPATH}" || mkdir "${GOPATH}"
source ~/.bashrc
brew install go
go version

Here we set the necessary environment variables GOPATH and GOROOT, and add them to your PATH. GOPATH is the location of your workspace (where you write your Go code), and GOROOT points to the directory where Go gets installed. The test -d command checks that the GOPATH exists and, if not, creates it. The source line reruns ~/.bashrc in the current shell, so you don't need to restart it.

Let's have a look at how you would do the same on Ubuntu:

echo 'export GOPATH=$HOME/go' >>~/.bashrc
echo 'export GOROOT=/usr/local/go' >>~/.bashrc
echo 'export PATH=$GOPATH/bin:$GOROOT/bin:$PATH' >>~/.bashrc
cd /tmp
wget
sudo tar -xvf go1.11.linux-amd64.tar.gz
sudo mv go /usr/local
source ~/.bashrc
go version

At the end, we should see the version printed, which should be 1.11.

Your First WebAssembly Program from Go Code
Go has strong opinions on where your code should be: the workspace, which is located at GOPATH. If you followed the directions above, this path is $HOME/go on Linux and Mac; to use a different workspace you can set the GOPATH environment variable. Go also has strong opinions on how the workspace should be arranged. Let's assume we're going to make a package called webassembly.
The $GOPATH directory should have a folder named src, containing a folder for the webassembly package. Finally, we are going to make a file at the following path: $GOPATH/src/webassembly/webassembly.go. For the actual code in this file, we'll write a simple 'Hello World' application:

package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}

We can then compile this Go file to WASM using the following command:

GOOS=js GOARCH=wasm go build -o main.wasm

Here GOOS is the target 'operating system', which in this instance is JavaScript, and GOARCH is the architecture, which in this case is WebAssembly (wasm). Running this command will compile the program to a file called `main.wasm` in the folder. You might notice the build size is quite large (~2MB). While this could be prohibitive depending on your application, it should improve significantly in the future as more work goes into the Go wasm target and the underlying wasm standards gain additional functionality.

Using Your Compiled WebAssembly in the Browser
To call our compiled Go code, we need to make use of a library provided in the Go source called wasm_exec.js. We can find this in the GOROOT folder under /misc/wasm/wasm_exec.js. The easiest thing to do here is simply copy it to where we want to use our WASM file. For simplicity, let's assume we're going to write our index.html file (which hosts the code for our web page and will load our WASM file) in the same folder as our Go file ($GOPATH/src/webassembly/webassembly.go).
Let's go ahead and see what that would look like:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <script src="wasm_exec.js"></script>
    <script>
        if (WebAssembly) {
            // WebAssembly.instantiateStreaming is not currently available in Safari
            if (WebAssembly && !WebAssembly.instantiateStreaming) { // polyfill
                WebAssembly.instantiateStreaming = async (resp, importObject) => {
                    const source = await (await resp).arrayBuffer();
                    return await WebAssembly.instantiate(source, importObject);
                };
            }
            const go = new Go();
            WebAssembly.instantiateStreaming(fetch("main.wasm"), go.importObject).then((result) => {
                go.run(result.instance);
            });
        } else {
            console.log("WebAssembly is not supported in your browser");
        }
    </script>
</head>
<body>
    <main id="wasm"></main>
</body>
</html>

Here the wasm_exec.js file gives us access to the new Go() constructor, which allows us to run our code. The WebAssembly global is accessible in supported browsers. We use the instantiateStreaming method, which is polyfilled for unsupported browsers (currently Safari, out of the big four browser vendors). WebAssembly.instantiateStreaming is the most efficient way to load WASM code, compiling and instantiating a module for us to use. We can combine it with the fetch API to retrieve our WASM file. The result then gets passed to our constructed Go object, which lets us run our compiled code. Before we load our page and try out the code, we will need to serve it from a web server to avoid CORS issues. Here I would recommend http-server, a Node.js package which provides a fully functioning web server. If you were determined to stay in the Go ecosystem to serve your files, the Go documentation recommends doing so using goexec by running goexec 'http.ListenAndServe(":8080", http.FileServer(http.Dir(".")))'. If we save this page and then open our developer console, we should see "Hello, World!" printed out.

Interacting with JavaScript and the DOM
You can call JavaScript from WASM using the syscall/js module.
Let's assume we have a JavaScript function simply called updateDOM that looks like this:

function updateDOM(text) {
    document.getElementById("wasm").innerText = text;
}

All this function does is set the inner text of our main container to whatever gets passed to it. We can then call this function from our Go code in the following fashion:

package main

import (
    "syscall/js"
)

func main() {
    js.Global().Call("updateDOM", "Hello, World")
}

Here we use the js.Global function to get the global window scope. We call the global JavaScript function updateDOM by using the Call method on the value returned from js.Global. We can also set values in JavaScript using the Set function. At the moment, setting values works well with basic types but errors on types such as structs and slices. Here we'll pass some basic values over to JavaScript, and show how you could use a simple workaround to marshal a struct into JSON by leveraging JavaScript's JSON.parse:

package main

import (
    "encoding/json"
    "fmt"
    "syscall/js"
)

type Person struct {
    Name string `json:"name"`
    Age  int    `json:"age"`
}

func main() {
    js.Global().Set("aBoolean", true)
    js.Global().Set("aString", "hello world")
    js.Global().Set("aNumber", 1)

    // Workaround for passing structs to JS
    frank := &Person{Name: "Frank", Age: 28}
    p, err := json.Marshal(frank)
    if err != nil {
        fmt.Println(err)
        return
    }
    obj := js.Global().Get("JSON").Call("parse", string(p))
    js.Global().Set("aObject", obj)
}

We can also use Set to bind these values to callbacks within Go, using the NewCallback method. Let's say we want to set a method in JavaScript, bind it to a Go function, and have it call that function when invoked.
We could do that like this:

package main

import (
    "fmt"
    "syscall/js"
)

func sayHello(val []js.Value) {
    fmt.Println("Hello ", val[0])
}

func main() {
    c := make(chan struct{}, 0)
    js.Global().Set("sayHello", js.NewCallback(sayHello))
    <-c
}

Here we create a channel and then await values (which never arrive), keeping the program alive so that the sayHello callback can be called. Assuming we had a button which calls the function entitled sayHello, this would in turn call the Go function with whatever argument gets passed in, printing the greeting (e.g., 'Hello, World'). We can also use the Get method to retrieve a value from the JavaScript main thread. For example, let's say we wanted to get the URL of the current page. We could do that as follows:

package main

import (
    "fmt"
    "syscall/js"
)

func main() {
    href := js.Global().Get("location").Get("href")
    fmt.Println(href)
}

This would print the webpage URL to the web console. We can extrapolate this to get hold of any global JavaScript object, like document or navigator for example.

Conclusion
In this post, we have seen how to get the version of Go necessary to compile WebAssembly, how to structure your files, and how to compile a simple Go program to WebAssembly. We have also taken this further and demonstrated how to set JavaScript variables from Go (and in turn the DOM), set Go variables from JavaScript and also set Go callbacks in JavaScript. The true value of WebAssembly here is to do heavy-lifting operations that we might normally do in something like a Web Worker. There are a few examples of such programs across the web, including an A Star path search algorithm, calculating factorials, a fully fledged Gameboy Color emulator (written in Go), or a video effects application — good examples of where the near-native speeds of WebAssembly shine. Generally, any time we are considering heavy computation in the browser, WebAssembly may be a good choice.
Unfortunately, as we have to proxy DOM updates through JavaScript, it is unlikely that DOM-heavy code would see much benefit from using WebAssembly. Having said this, WebAssembly provides another tool in the web developer's arsenal, allowing them to unlock near-native performance for certain tasks.
https://www.sitepen.com/blog/compiling-go-to-webassembly/
So I can't figure out how to change my result into an exact decimal. My problem is that the tan function returns 1 instead of something like 1.37...

Code:
#include <cmath>
#include <iostream>
#include <cstdlib>
#include <iomanip>
#define Pi double(3.14159)
using namespace std;

void Switch(double x, double y, double z, char Type)
{
    int h = 0;
    switch (Type)
    {
    case 'e':
        cout << "\n\tThe Area of the Equilateral is = " << 1.0/2.0 * x * sqrt(x*x - (x/2)*(x/2));
        break;
    case 's':
        cout << "\n\tThe Area of the Square is = " << x * x;
        break;
    case 'p':
        h = tan(54 * Pi/180);
        cout << "\n\tThe Height is = " << h;
        break;
    default:
        cout << "\n\tinvalid entry";
    }
}

I tried making them all into doubles by putting .0 at the end of each number, i.e. 180.0. That didn't work. I also tried removing the #define Pi and putting double Pi = 3.14159 in my code instead. That didn't work either. Any simple suggestions? I am only in a beginning C++ class, mind you, so the simpler the suggestion the better.
https://cboard.cprogramming.com/cplusplus-programming/135376-loss-data-trig-function.html
I am stuck on trying to figure out how to insert a vector for a value in a map. For example:

#include <iostream>
#include <vector>
#include <map>
using namespace std;

int main()
{
    map<int, vector<int> > mymap;
    mymap.insert(pair<int, vector<int> >(10, #put something here#));
    return 0;
}

I want the value to be the vector {1,2}.

Basically your question is not about inserting std::vector into a std::map. Your question is how you can easily create an anonymous std::vector with arbitrary initial element values. In ISO C++03, you can't. C++11 allows using initialization lists for this, however. If you are stuck with a C++03 compiler, you could create a helper function to return a vector with the specified elements:

std::vector<int> make_vector(int a, int b)
{
    std::vector<int> v;
    v.push_back(a);
    v.push_back(b);
    return v;
}

If the vectors you're inserting are of different sizes, you could use a variadic function, although doing so would require that you either pass along the number of elements or have a reserved sentinel value.
https://codedump.io/share/AVKhG4h5ihOB/1/insert-vector-for-value-in-map-in-c
std::cos(std::complex)
From cppreference.com

Computes the complex cosine of a complex value z.

Parameters
Return value
If no errors occur, the complex cosine of z is returned. Errors and special cases are handled as if the operation is implemented by std::cosh(i*z), where i is the imaginary unit.

Notes
The cosine is an entire function on the complex plane, and has no branch cuts. The mathematical definition of the cosine is cos z = (e^(iz) + e^(-iz)) / 2.

Example

#include <iostream>
#include <cmath>
#include <complex>

int main()
{
    std::cout << std::fixed;
    std::complex<double> z(1, 0); // behaves like real cosine along the real line
    std::cout << "cos" << z << " = " << std::cos(z)
              << " ( cos(1) = " << std::cos(1) << ")\n";

    std::complex<double> z2(0, 1); // behaves like real cosh along the imaginary line
    std::cout << "cos" << z2 << " = " << std::cos(z2)
              << " (cosh(1) = " << std::cosh(1) << ")\n";
}

Output:
cos(1.000000,0.000000) = (0.540302,-0.000000) ( cos(1) = 0.540302)
cos(0.000000,1.000000) = (1.543081,-0.000000) (cosh(1) = 1.543081)
https://en.cppreference.com/w/cpp/numeric/complex/cos
Konstantin Shvachko updated HDFS-2280:
--------------------------------------
Attachment: checksumException.patch

In the patch I:
1. set imageDigest to null in BackupStorage.reset() to make sure the old digest does not interfere with newly loaded images, and
2. populate the imageDigest obtained from the NN into the BN's imageDigest before the new image is loaded on the BN, in order to ensure it is verified against the expected value.

> BackupNode fails with MD5 checksum Exception during checkpoint if BN's image is outdated.
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-2280
> URL:
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 0.22.0
> Reporter: Konstantin Shvachko
> Assignee: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: checksumException.patch
>
> If BN starts after NN made changes to the namespace, it fails with an MD5 checksum Exception during checkpoint when it reads the new image uploaded from NN. This is happening because imageDigest is not reset to null, but keeps the value of the originally loaded BN image.

--
This message is automatically generated by JIRA.
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201108.mbox/%3C757452121.11836.1314233009054.JavaMail.tomcat@hel.zones.apache.org%3E
Bug #15121 (closed)
Memory Leak on rb_id2name(SYM2ID(sym))

Description
@ohler55 mentioned that calling rb_id2name(SYM2ID(sym)) seems to lock symbols in memory, but I couldn't find any issue open for that. So I'm just opening one just in case; please close if this is a dupe. I created a sample C extension to reproduce this:

#include "extconf.h"
#include <stdlib.h>
#include <stdio.h>
#include <ruby.h>

VALUE rb_leak_sym(VALUE self, VALUE argument1) {
    const char *sym = rb_id2name(SYM2ID(argument1));
    return Qnil;
}

void Init_testsym() {
    rb_define_global_function("leak_sym", rb_leak_sym, 1);
}

We can see it leaking memory with this snippet:

require "testsym"
require "objspace"

def record_allocation
  GC.start
  GC.start
  puts "Before - Objects count: #{ObjectSpace.each_object.count}"
  puts "Before - Symbols count: #{Symbol.all_symbols.size}"
  yield
  GC.start
  GC.start
  puts "After - Objects count: #{ObjectSpace.each_object.count}"
  puts "After - Symbols count: #{Symbol.all_symbols.size}"
end

def leak_symbols
  1_000_000.times.each { |i| leak_sym("string_number_#{i}".to_sym) }
end

record_allocation do
  leak_symbols
end

Output:

$ ruby -v test.rb
ruby 2.4.4p296 (2018-03-28 revision 63013) [x86_64-darwin17]
Before - Objects count: 8784
Before - Symbols count: 3063
After - Objects count: 2008786
After - Symbols count: 1003063

Updated by jeremyevans0 (Jeremy Evans) almost 4 years ago
I believe this is expected. Once you ask for the ID of a dynamic symbol, Ruby needs to create a real symbol, and once the real symbol is created, it can never be garbage collected. You can replace your leak_sym method with the following and observe the same behavior (symbols that appear in Ruby code are also real symbols):

def leak_sym(sym)
  eval sym.inspect
end

Updated by ko1 (Koichi Sasada) almost 4 years ago - Status changed from Open to Rejected
Yes, it is expected. We separate all symbols into two categories:
- static symbols: used by method names and so on. They are not GC-managed.
- dynamic symbols: created by a Ruby program, e.g. str.to_sym. Static symbols are used as method names and so on, and SYM2ID() makes static symbols from dynamic symbols. This is an implementation limitation, and we expect this limitation to be acceptable for most Ruby programs.
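The interning behavior behind this can be observed from plain Ruby (a hedged sketch; the symbol names here are arbitrary): the same text always yields the same Symbol object while it is alive, and a literal (static) symbol with the same name is that very object — which is why pinning it via SYM2ID pins "the" symbol itself.

```ruby
# Dynamic symbols created from strings are interned while alive:
a = "some_dynamic_name".to_sym
b = "some_dynamic_name".to_sym
raise "not interned" unless a.equal?(b)

# A literal symbol and a dynamic one with the same text are the same object:
raise "mismatch" unless :some_static_name.equal?("some_static_name".to_sym)

puts "symbols interned as expected"
```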
https://bugs.ruby-lang.org/issues/15121
I know how to write a hundred lines of C, but I do not know how to read/organize a larger source like Rebol. Somewhere there was a tutorial with the host kit and a DLL, but it seems R3 is now statically linked, so I do not know where to look. How would I write a native which takes a value and returns another? Where do I put it in the source? What do I need to obey, like telling the GC I created something in C? Also, how can I embed R3 in other programs, to call it from Python or Node? I ask because the Python/Node part comes later, but my learning-main should access R3 in a similar way; that means a DLL. And are there some typical hooks for startup/shutdown etc. in such FFIs? [Edit: forgot to mention: it is for Rebol 3.]
As for your second question, which is more of a user-facing question about interpreter embedding, one of the first places to look is just in how the simple REPL is implemented. There are two versions in the main repository right now, one for Posix and one for Windows. So a string goes in and a string comes out. But there are more complicated forms of interaction, and you can get to them from reb-host.h, which includes these files from src/include #include "reb-config.h" #include "reb-c.h" #include "reb-ext.h" // includes reb-defs.h #include "reb-args.h" #include "reb-device.h" #include "reb-file.h" #include "reb-event.h" #include "reb-evtypes.h" #include "reb-net.h" #include "reb-filereq.h" #include "reb-gob.h" #include "reb-lib.h" So you can look through those files for the API as it existed at the moment of the December-12th open-sourcing. Things will be evolving, and hopefully much better documented. But for now that's what seems to be available. You could link the host kit as a shared/DLL or static library, it depends on your build settings and what compiler you use. Similar Questions
http://ebanshi.cc/questions/1774504/for-rebol3-want-to-get-started-with-native-extensions-on-linux-how-do-i-write
Hello friends, I would like to calculate the area and volume of a cylinder, but the radius and height should be integer, float and double. How can I do this? Can you help me?

First, do you know the formula for calculating the surface area and volume in terms of radius and height? Why do you want to do it for integer, float and double? Why not just double?

2kaud, first of all, thanks for responding. I know the formulas, but I am trying to do polymorphism. For instance, with a single calculate() function I am trying to offer three options, but I don't know how.

Polymorphism is to do with classes derived from a base class. So what's your base class definition and what are your derived classes?

You are right, it should be with inheritance. Actually, my teacher hasn't taught inheritance yet. I think the question can be done without inheritance. What is your advice? Thank you.

Solve it without using inheritance and the associated polymorphism then. Since you say you know the formula involved, what's stopping you now?

It might be useful if you said exactly what your assignment is, because if you are trying to use polymorphism then you use base and derived classes with inheritance. So if your teacher hasn't covered this yet, it's unlikely that you are expected to use polymorphism.

Originally Posted by melissax: I think the question can be done without inheritance.
There are at least four different kinds of polymorphism available in the C++ type system. Only one is associated with inheritance and overriding. In your case the most likely kind of polymorphism is coercion, also called casting. It's when different types "slide" into each other depending on context.
For example, an int into a double or the other way around. There are rules for what's allowed and not, and sometimes the programmer needs to be explicit about it by using a cast statement. Otherwise you may get a warning or even an error. This usually happens when the target type is "narrower" than the source type, so information may get lost.
Last edited by nuzzle; April 27th, 2013 at 10:54 AM.

Personally, I see two ways to do this: method overloading, or templates:

Code:
class cylinder_calc {
public:
    static int get_volume(int r, int h){ ... }
    static float get_volume(float r, float h){ ... }
    static double get_volume(double r, double h){ ... }
};

double r = 1.0;
double h = 1.0;
double v = cylinder_calc::get_volume(r, h);

or templates:

Code:
template <typename T>
class cylinder_calc {
public:
    static T get_volume(T r, T h){ ... }
};

double r = 1.0;
double h = 1.0;
double v = cylinder_calc<double>::get_volume(r, h);

Ninja

Thank you for the answer; I think I couldn't explain the question clearly. As a question: 1. r and h have to be input from the keyboard, but you gave them values, and as double. 2. r and h can be input from the keyboard and may be integer, double or float, so that the calculation of volume() and area() can permit the polymorphism. 3. The Area() and Volume() functions must be declared as member functions of a class. I would like to thank nuzzle and 2kaud too.

It might be useful if you posted what I asked in post #7.
If you need to input r and h from the keyboard as either int, float or double, then you have three choices.
a) Ask the user whether they will input values as int, float or double, and then read the input into variables of the appropriate type.
b) Assume the input is double, read it as double, and then inspect the value entered to determine whether it is actually an integer or a float.
c) Read the input as a string and then inspect it to determine whether it is an int, float, double or a bad number, and convert to the appropriate type.
Once you have the input in variables of the appropriate type, you can use the class that ninja9578 suggested in post #11, which overloads the get_volume function depending upon the type of the parameters used. I would suggest that you post your code here for further guidance.

2kaud, I am back; I had midterms so I couldn't come. I want to pick up the subject again with my study. I have a question related to function overloading. I will calculate the area and volume of a cylinder with respect to these rules:
a) Radius and height will be input from the keyboard.
b) Radius and height can be real numbers, float or integer; for that reason the data input will permit different kinds of numbers, so polymorphism will be used.
c) The area and volume calculation functions must belong to the class; they must be member functions.
#include<iostream>
#include<conio.h>
using namespace std;

class cylinder{
public:
    int r,h,x;
    int area();
    double area();
    float area();
    int volume();
    float volume();
    double volume();
}s;

int cylinder::area(){ return 3.14*r*r; }
int cylinder::volume(){ return 3.14*x*x*h; }
float cylinder::area(){ return 3.14*r*r; }
float cylinder::volume(){ return 3.14*x*x*h; }
double cylinder::area(){ return 3.14*r*r; }
double cylinder::volume(){ return 3.14*x*x*h; }

int main(){
    cout <<"\n";
    cout <<"*************** MENU ******************\n";
    cout <<"\n";
    cout <<"Please select option:\n";
    cout <<"1.Area of Cylinder\n:";
    cout <<"2.Volume of Cylinder:\n";
    cout <<"7.Exit\n";
    cout <<"Enter radius\n";
    cin>>r;
    cout <<"Enter height\n";
    cin>>h;
    cout<< "Area =" << s.area() << "\n";
    cout<< "Volume=" << s.volume() << "\n";
    getch();
    return 0;
}

I am not sure my way is right; I am getting errors. Can you help me? Thank you.

You're guessing without understanding, and that will never work. Why do you have functions with the same name but different return types? C++ doesn't support that.
http://forums.codeguru.com/showthread.php?537169-In-order-to-use-the-functionality-of-the-base-class&goto=nextnewest
When I saw MSBuild for the first time I thought - "Yeah, good improvement over the standard makefile - we now have XML (surprise!!) and it is extensible". The real import of the word "extensible" struck home when I started writing custom tasks. The concept of being able to call an army of reusable objects from a makefile - well, we'll start giving them more respect and call them proj files - is quite empowering. But (ahh, there's the but) there's one pain point. Writing a task means writing a new class, which means more testing, maintenance etc. And what if all you wanted to do was to say hey! (Well, I can't really think of 101 reasons to say "hey" in a program... but you get the point.)

So I wrote this object that executes C# code passed to it as a parameter. So now to greet someone all you need to do is:

<ExecuteCode Code="&quot;Wotcher!&quot;" />

Yeah, I know the &quot; spoils the effect :-(

You could also get something back from the object. Say you wanted a property in your proj file that had the value of the current time:

<ExecuteCode Code="DateTime dt = DateTime.Now; ReturnValue = dt.ToString();">
    <Output TaskParameter="OutputString" PropertyName="Time" />
</ExecuteCode>

Now $(Time) can be used like any other property in your proj file. I had hard-coded a few standard assemblies and "using directives" for ease of use of the object. For the sake of extensibility I take in the assemblies to be referenced and the namespaces that'll figure in the "using directives" as parameters of the Task. So if you are strictly against hard-coding anything you could take them as parameters like this:

<ExecuteCode Code="DateTime dt = DateTime.Now; ReturnValue = dt.ToString();"
             UsingDirectives="System"
             ReferenceAssemblies="System.dll">
    <Output TaskParameter="OutputString" PropertyName="CurrentDate" />
</ExecuteCode>

Behind the Scenes
ExecuteCode is an object that adds some boilerplate code to the parameter that gets passed in, compiles it, and then uses reflection to load up the assembly and execute the method.
Compiling C# code is rather easy:

CSharpCodeProvider csc = new CSharpCodeProvider();
CompilerParameters cp = new CompilerParameters();
cp.GenerateInMemory = true;
string program = ConstructProgram();
CompilerResults cr = csc.CompileAssemblyFromSource(cp, program);

ConstructProgram() is a function that adds the boilerplate code to the code that was passed in (like using directives, class name, method name, closing braces).

We can also set the assemblies to be referenced in the CompilerParameters object like this:

cp.ReferencedAssemblies.Add("System.dll");

Arggh! Errors!!

//Run
if (cr.Errors.HasErrors == false)
{
    Assembly assembly = cr.CompiledAssembly;
    Object o = assembly.CreateInstance(ClassName);
    Type t = o.GetType();
    MethodInfo mi = t.GetMethod(MethodName);
    OutputString = (string)mi.Invoke(o, null);
    returnValue = true;
}
else
{
    foreach (CompilerError err in cr.Errors)
    {
        Log.LogError("(" + err.Line + "," + err.Column + "): " + err.ErrorText);
    }
    Log.LogMessage(program);
    returnValue = false;
}

If you don't have any errors in the code that you passed in (fat chance!), then load the assembly and invoke the method; otherwise spew out the errors.

Giving something back
Also notice that the method need not be Main and hence can have any signature. The ConstructProgram() method adds a "return ReturnValue" statement, so to return something from your code you just need to set the ReturnValue variable. Notice that the output property OutputString is set to the return value of the method.
http://blogs.msdn.com/b/srivatsn/archive/2005/09/20/471709.aspx
Introduction

If you work with large datasets in JSON inside your Python code, then you might want to try using 3rd party libraries like ujson and orjson, which are replacements for Python's json library. As per their documentation:

ujson (UltraJSON) is an ultra fast JSON encoder and decoder written in pure C with bindings for Python 3.7+.

orjson is a fast, correct JSON library for Python. It is the fastest Python library for JSON encoding & decoding. It serializes dataclass, datetime, numpy, and UUID instances natively.

Benchmarking

I did a basic benchmark comparing json, ujson and orjson. The benchmarking results were interesting.

import time
import json

import orjson
import ujson


def benchmark(name, dumps, loads):
    start = time.time()
    for i in range(3000000):
        result = dumps(m)
        loads(result)
    print(name, time.time() - start)


if __name__ == "__main__":
    m = {
        "timestamp": 1556283673.1523004,
        "task_uuid": "0ed1a1c3-050c-4fb9-9426-a7e72d0acfc7",
        "task_level": [1, 2, 1],
        "action_status": "started",
        "action_type": "main",
        "key": "value",
        "another_key": 123,
        "and_another": ["a", "b"],
    }

    benchmark("Python", json.dumps, json.loads)
    benchmark("ujson", ujson.dumps, ujson.loads)
    # orjson only outputs bytes, but often we need unicode:
    benchmark("orjson", lambda s: str(orjson.dumps(s), "utf-8"), orjson.loads)

# OUTPUT:
# Python 12.502133846282959
# ujson 4.428200960159302
# orjson 2.3136467933654785

By these numbers, ujson is nearly 3 times faster than the standard json library, and orjson is over 5 times faster than the standard json library.

Conclusion

For most cases, you would want to go with Python's standard json library, which removes dependencies on other libraries. On the other hand, you could try out ujson, which is a simple drop-in replacement for Python's json library. If you want more speed, want dataclass, datetime, numpy, and UUID instances serialized natively, and you are ready to deal with slightly more complex code, then you can try your hands on orjson.
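One concrete difference worth noting: the standard json module cannot serialize datetime objects without help, while orjson handles them natively. A small stdlib-only sketch (the helper name is mine):

```python
import json
from datetime import datetime, timezone


def to_jsonable(obj):
    # Fallback used by json.dumps for types it cannot handle itself.
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"not serializable: {type(obj)!r}")


dt = datetime(2019, 4, 26, tzinfo=timezone.utc)
print(json.dumps({"ts": dt}, default=to_jsonable))
# → {"ts": "2019-04-26T00:00:00+00:00"}
```

With orjson, no default hook is needed for datetime; orjson.dumps({"ts": dt}) emits an RFC 3339 timestamp on its own.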
https://practicaldev-herokuapp-com.global.ssl.fastly.net/dollardhingra/benchmarking-python-json-serializers-json-vs-ujson-vs-orjson-1o16
XQuery/DOJO data

Motivation

You want to use XQuery with your DOJO JavaScript library, which uses a variation of JSON syntax.

Method

DOJO is a framework for developing rich client-side applets in JavaScript: from the nice-to-have to the core webapp. Some day you may want to deliver your data in a way that you or other people can easily use from DOJO. DOJO specifies its own idiosyncratic way of wrapping data in JSON-formatted objects so it can be consumed by lots of its widgets: trees, grids, comboboxes, input fields, etc. The example below (note the use of single quotes, which makes this invalid JSON) is taken from its web-supplied documentation:

{ identifier: 'abbr',
  label: 'name',
  items: [
    { abbr:'ec', name:'Ecuador', capital:'Quito' },
    { abbr:'eg', name:'Egypt', capital:'Cairo' }
]}

Now, if e.g. you want to feed an incremental user-input widget from a server-side search, XQuery (in eXist at least) makes this a piece of cake. Please read the script below as an introduction to the concept; very likely it can be optimized. The search itself uses a Lucene fulltext index, which returns very quickly.

xquery version "1.0";

import module namespace json="http://www.json.org";

let $query :=
    <query>
        <near>{$querystring}</near>
    </query>

(: fetch results, don't forget to create an index in collection.xconf :)
let $hits := collection($coll)//article[ft:query(., $query)]
let $count := count($hits)
let $result :=
    <result>
        <identifier>id</identifier>
        <label>title</label>
        <count>{$count}</count>
        {
            for $item in $hits
            return
                <items>
                    <id>{string($item/@id)}</id>
                    {$item/title}
                </items>
        }
    </result>
return json:xml-to-json($result)

The XQuery extension json:xml-to-json($node as node()) does all the magic. In the result variable the data structure is created in the way DOJO wants it (per default), as shown above. Another thing to note: DOJO expects the identifier to be unique. It is up to you to design your data to satisfy this.

Another note: as of today (eXist trunk of early September 2010), numbers in the output are quoted; it is up to you to convert them on the client for optimal processing.
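To make the target shape concrete, here is the same item-store structure built in plain Python (the item values are invented for illustration):

```python
import json

# Two fake search hits, shaped like the <items> elements the XQuery emits.
hits = [
    {"id": "a1", "title": "First article"},
    {"id": "a2", "title": "Second article"},
]

# Mirror the <result> element: DOJO's item-store format.
result = {
    "identifier": "id",   # DOJO expects this field to be unique per item
    "label": "title",
    "count": len(hits),
    "items": hits,
}

print(json.dumps(result, indent=2))
```

Serializing with a real JSON library also sidesteps the quoted-numbers quirk mentioned above: count comes out as a number, not a string.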
https://en.wikibooks.org/wiki/XQuery/DOJO_data
Greetings! New to this forum, first time posting. =)

I'm currently trying to make a log-in or check-in system. It's meant to allow users to log in, log out, and to keep logs of the date and time the users did this. The issue is that I have never made anything like it before, so it's a nice little project where I try to learn as much as possible as I go along.

Now the current issue that I am having is that I'm terrible with all forms of coding which involve sending data from one class to the other, and that is exactly the problem I am having right now. I have a mainframe.java class which contains all of the GUI, a basic JFrame with a couple of JButtons, some text and the like. When I press the button "Add Member", currently a JOptionPane.showInputDialog shows up, and I can type in the name of the user. However, this is where I really don't know how to proceed. I have a Member.java class I want to use for keeping track of all of the separate members/users, but I do not know how to send over the data, store it properly, and then call for the respective user whenever I need to refer to it (for example when he/she logs out).

This is the code for when a button is pushed, and it checks that Add Member is the right one:

Code Java:
public void actionPerformed(ActionEvent e) {
    int i = 0;
    int i2 = 1;
    boolean cancel = false;
    if(e.getSource() == AMB) {
        while(cancel == false) {
            if(members[i] == "") {
                i2 = i+1;
                //***This is where I want to send the information to the members class to be stored for overall usage of the new user.
                //Members.class name = new class(arguments);
                = JOptionPane.showInputDialog(null, "Adding member:");

This is a failure of an attempt of me trying to at the very least send over the name...

            if(members[i] == null) {
                members[i] = ""; //In case no name is written, this makes sure that the array slot stays empty for reuse
            }
            cancel = true;
        }

(I skipped the rest because my post was denied when trying to post too much code, so I only posted the most essential bit; if there is anything else in the code that you would like me to post for further clarification, feel free to ask.)

And this is the Members.java class:

Code Java:
package LoginSystem;

public class Members {
    String name;
    int lastLoggedIn;

    public Members (String Name, int logIn) {
        lastLoggedIn = logIn;
        name = Name;
    }
}

Very basic so far since I don't really know how to flesh it out. I'm sorry that I can't post much of what I've tried, but that's mostly because I have no idea of what to try or where to find information on what I could try. I have tried to search around for answers but to no success. =/

Any help, be it tips, pointers or just general advice on how to proceed would be greatly appreciated!

Kind regards,
Lora.
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/32957-multiple-instances-class-printingthethread.html
22 January 2008 18:44 [Source: ICIS news]

By Stephen Burns

HOUSTON (ICIS news)--Some key US chemical spot values were heading lower on Tuesday amid the turmoil in global financial markets, but most traders were content to wait and see how the situation would pan out.

US financial markets reopened on Tuesday after a three-day long weekend. A few traders were at their desks on Monday, but there was no substantive business done. The uncertainty in the financial markets "makes buyers reluctant to purchase", one polymers trader said.

In the aromatics sector, January benzene was assessed around $3.43-3.45/gal (€2.37-2.38/gal) FOB (free on board) HTC.

Toluene was also softer, according to a Houston-based aromatics broker, but no specific levels were clear in the absence of firm price indications. January toluene was at $2.79-2.81/gal FOB HTC on Tuesday.

Downstream, the styrene reaction was muted. "Nothing is really going on in styrene," said a major North American supplier. "It is too early to really look at a reaction in this market, due to the winter doldrums." Styrene activity was not expected to pick up again until late February, after the Chinese New Year, which market sources regard as a typical cycle.

In the olefins market, refinery-grade propylene (RGP) was heard traded at 56.5 cents/lb ($1,245/tonne) on Tuesday, around steady with the levels seen over the last two weeks. There were no fresh price indications for ethylene, with trading described as slow.

US Gulf methanol spot prices were softer on Tuesday, but the slip was seen as a continuation of a downward slide from the record highs of the fourth quarter. A February methanol barge offer was heard at $1.55/gal FOB on Tuesday morning, compared with Friday's January bid/offer assessment of $1.65-1.75/gal FOB.

Energy values were also easing on Tuesday morning. Around 11:30 Houston time (17:30 GMT), NYMEX crude oil futures were down $1.20 at $89.37/bbl, while reformulated blendstock for oxygenate blending (RBOB) gasoline futures were down 3.08 cents at 227.26 cents/gal. Natural gas futures were down 29 cents at $7.70/MMBtu.

(Additional reporting by Brian Balboa, William Lemos and David Bar
http://www.icis.com/Articles/2008/01/22/9094907/some-us-spot-chemicals-slip-as-buyers-stand-pat.html
Sorting listview

ListView (and the default GridView view) is a great bare-bones control for us to add extra functionality to - though sorting out of the box would have been handy. On the face of it, sorting is easy: you handle the click of the header and call Sort on the collection being bound to, using the property of the header just clicked. What could be hard about that?

Welllll, there are two problems. Firstly, you would like to make the sorting process generic rather than repeating code for each header of each grid. Secondly, you need to know which property to actually sort on and in which direction. The reason that you don't always know the property is that the property may be at some arbitrary point inside a template defined for the column.

Attached properties are the answer to both these problems, as well as a custom GridViewColumn child class. For problem one, define an attached property to add sorting functionality and then add it to the listview definition, like so:

<ListView Name="lvItems" gu:IsGridSortable="True"

Then to solve problem two, use the derived GridViewColumn class like so:

<gu:SortableGridViewColumn

The SortProperty property enables you to specify the property that needs to be sorted very easily.

Here, gu is an xmlns definition pointing to the namespace where your new attached property class lives. Luckily, for you and me, someone has kindly created just such classes - just check out Joel Rummerman's blog:

You will see on his blog that if you define two styles with keys HeaderTemplateArrowDown and HeaderTemplateArrowUp then that style will be shown in the header as well. So if they are triangles, the header will show them, which is great. For the listview sorting itself, take a look at the attached property implementation by Mike Brown, at his blog:
http://itknowledgeexchange.techtarget.com/wpf/sorting-listview/
How to supercharge Swift enum-based states with Sourcery

I really love enums with associated values in Swift, and they're my main tool for designing state. Why are enums so much more powerful and beautiful an option? Mostly because they allow me to keep strong invariants about the data in my type system.

For example, if I have a screen which will display some data loaded from the server, I can represent it with this model:

struct MyScreenModel {
    let isLoaded: Bool
    let isEmpty: Bool
    let data: SomeInfo?
    let error: Error?
}

Then I can write unit tests to check the correctness of this model and add documentation describing the expected combinations of flags and optionals. On the other hand, I can represent it with an enum:

enum MyScreenModel {
    case empty
    case loading
    case data(SomeInfo)
    case failed(Error)
}

Now my type reflects all invariants that I have on my data. This means that the compiler will be able to check them, and I can remove the unit tests responsible for verifying these invariants.

The pros are clear and loud; what about the cons? What will we pay for this kind of guarantee? Let's compare how easy it is to work with each of these types:

func viewWillLayoutSubviews() {
    self.loadingIndicator.isHidden = self.model.isLoading == false
    ...
}

compared to our previous version:

func viewWillLayoutSubviews() {
    self.loadingIndicator.isHidden = {
        guard case .loading = self.model else { return true }
        return false
    }()
    ...
}

Obviously, something bad is happening. We lose the ability to refer to specific parts of our model, and now we are forced to double-check what exactly is stored in this model. Editing makes the situation even worse. Let's declare the SomeInfo struct as mutable and look at how it will affect our code:

struct SomeInfo {
    var name: String
}

In the first, naive, unsafe way:

self.model.data?.name = "New name"

And in the safe, enum way:

self.model = {
    guard case var .data(info) = self.model else { return self.model }
    info.name = "New name"
    return .data(info)
}()

Obviously, this is wrong.
Can we achieve the usability of the struct-based approach and the safety of the enum-based approach?

Prism to the rescue

What is a prism? Generally speaking, a prism is an object capable of decomposing some beam into details, making the details visible and obvious. What is a prism in terms of our problem domain? If our enum version of the data structure is a solid beam, then our struct is the "spectrum". So a prism is a way to turn an enum into a struct. There are several ways to achieve it, and I will showcase the extension-based approach:

extension MyScreenModel {
    var isLoading: Bool {
        guard case .loading = self else { return false }
        return true
    }

    var isEmpty: Bool {
        guard case .empty = self else { return false }
        return true
    }

    var data: SomeInfo? {
        guard case let .data(someInfo) = self else { return nil }
        return someInfo
    }

    var error: Error? {
        guard case let .failed(error) = self else { return nil }
        return error
    }
}

And now we can go back to the short and concise syntax of the struct-based variant and have compiler guarantees from the enum-based option. Win-win! The only downside: writing these extensions is boring.

Adding some Sourcery

Sourcery is a standalone tool that allows you to wire up information about your code with a template language called Stencil. In other words, it is a cool code-generation approach driven by your own code. If you are not familiar with the syntax of Stencil, don't worry. I also referred to my intuition and Google rather than deep knowledge and understanding.

So what do I want to get? I want to have an extension for every enum. Let's make it:

{% for enum in types.enums where enum.cases.all.count > 0 and not enum.accessLevel == 'private' %}
...
{% endfor %}

This code is so nice that I don't need to write any comments. It is basically self-explanatory. And what do we want for each enum? An extension!

{{ enum.accessLevel }} extension {{ enum.name }} {
    ...
}

What's next? For each case we need to generate a property.

{% for case in enum.cases %}
...
{% endfor %}

What do we want to generate?
If our case contains an associated value we want an optional accessor; otherwise, a boolean.

{% if case.hasAssociatedValue %}
{% call associatedValueVar case %}
{% else %}
{% call simpleVar case %}
{% endif %}

We found some cool function-call-like syntax! Let's look at the simple case implementation:

{% macro simpleVar case %}
public var is{{ case.name|upperFirst }}: Bool {
    get {
        guard case .{{ case.name }} = self else { return false }
        return true
    }
    set {
        guard newValue else { fatalError("Setting false value forbidden") }
        self = .{{ case.name }}
    }
}
{% endmacro %}

For the getter we just check self and return true or false. The setter allows us to switch the enum like this:

var model = MyScreenModel.empty
model.isLoading = true
// is the same as
model = .loading

Setting false doesn't make any sense, so I added a runtime check to catch it. What about cases with associated types?

{% macro associatedValueVar case %}
public var {{ case.name }}: {% call caseType case %} {
    get {
        guard case let .{{ case.name }}({{ case.name }}) = self else { return nil }
        return {{ case.name }}
    }
    set {
        guard let newValue = newValue else { fatalError("Setting nil value forbidden") }
        self = .{{ case.name }}({% call caseSet case %})
    }
}
{% endmacro %}

Pretty simple, I would say. It is tricky at first, but after a three-minute look it becomes really simple. Note that the caseType and caseSet macros are out of scope here; they mostly exist to normalize access to type names. The full text of the template you can easily find in the public production codebase.

Connecting Sourcery to Xcode

So we have our template. How do we actually generate code with it? There are plenty of ways to do it in the Sourcery README file, but I want to show you the way that we found for ourselves.

Step 1. Add an Xcode build rule to support *.stencil files. Go to Project -> Build rules -> "+" and make it look like this:

set -e

if ! which sourcery > /dev/null; then
    echo "error: Sourcery is missing. Run brew install sourcery."
    exit 1
fi

templates=$1
output=$2

sourcery --sources sources --templates "${templates}" --output "${output}"

Here, sources/stencil.sh is a simple wrapper around the sourcery tool itself. As a result of this rule, every stencil file will be processed into a Swift file, which will then be compiled. The generated file is stored in the derived sources dir and will never appear in git or code review. It is updated on every build and is always up to date. In other words, it just works.

Step 2. Move your stencil templates to the compile phase.

Step 3. Build your project.

Sometimes something inside Xcode dies, and this setup will not bring you the updated version. In case of such problems, just clean derived data, as usual.

I hope my brief guide will be helpful for some of you :) If you want to learn more and look at a real project that uses this technique, you can examine our production code hosted here. Thanks for reading.
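The per-case branching the template performs can be mimicked with a toy generator. This Python stand-in is purely illustrative (it is not how Sourcery works internally); it just shows the simpleVar/associatedValueVar decision applied to the MyScreenModel cases:

```python
# Each case is (name, associated_type_or_None), as in the MyScreenModel enum.
cases = [("empty", None), ("loading", None), ("data", "SomeInfo"), ("failed", "Error")]


def gen_property(name, assoc_type):
    # Mirrors the template: bool accessor for plain cases,
    # optional accessor for cases with an associated value.
    if assoc_type is None:
        return f"public var is{name[0].upper() + name[1:]}: Bool {{ ... }}"
    return f"public var {name}: {assoc_type}? {{ ... }}"


for name, assoc in cases:
    print(gen_property(name, assoc))
```

Running it prints one property signature per case, which is exactly the shape of the extension the Stencil template emits.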
https://medium.com/flawless-app-stories/enums-and-sourcery-5da57cda473b?utm_campaign=Revue%20newsletter&utm_medium=Swift%20Weekly%20Newsletter%20Issue%20103&utm_source=Swift%20Weekly
This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.

We have a large source tree with many directories. When the system is built, that tree appears in one place in the namespace; then the build results are saved in "good builds" directories, one per good build, up to whatever we can save. The result is that source files are not where they were at build time.

GDB can handle this on a per-directory basis with the "directory" command, but when you have on the order of a hundred directories that is excessively painful. I made a local patch to add a source path name rewriting rule. That allows a substring of the source path name to be replaced by some different substring.

The current implementation is simplistic -- it allows exactly one substitution rule, and the matching is exact string match. It would be possible to allow multiple rules, and probably also fancier mechanisms like regexps. That wasn't necessary for our application.

Is this of interest to the greater GDB?

paul
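For the curious, the behavior of such a rule can be sketched in a few lines. This is an illustration of the idea only, not the actual patch, and the function name and paths are made up:

```python
def rewrite_source_path(path, old, new):
    # One rule, exact substring match - replace the first occurrence only,
    # matching the "simplistic" single-rule behavior described above.
    return path.replace(old, new, 1) if old in path else path


build_path = "/build/src/gdb/main.c"
print(rewrite_source_path(build_path, "/build/src", "/goodbuilds/2006-03/src"))
# → /goodbuilds/2006-03/src/gdb/main.c
```

Paths that do not contain the rule's source substring pass through unchanged, so unrelated directories are unaffected.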
http://sourceware.org/ml/gdb/2006-03/msg00189.html
On Thu, Jun 19, 2008 at 12:05 AM, Brad Schick <schickb@gmail.com> wrote:

An argument for #1 is that in use cases like mine (millions of docs) the presence of "extra" views can be a real headache. Users can just provide the views necessary for the plugins to function.

I like #1 also because it is "generative", as they say. By that I mean people can come up with new and interesting ways to use the underlying feature (in this case the ability to query couchdb from your plugin) that we can never anticipate.

> Another thought: If plugins want to maintain state that depends on couchdb
> documents, won't they need a notification facility? Filtering these
> notifications would be another good use of map (and perhaps reduce)
> functions associated directly with plugins.

There's currently the DbUpdateNotificationProcess, which I suppose is another good candidate to be treated as a plugin.

I wish I had a deeper understanding of the Erlang internals (where it would be convenient to draw the line between plugin and the code that hosts it). I guess there is some common server interface we could extract, and allow plugins to register for inclusion in. And then there are private paths, both in the document-id namespace (eg _design) and in the query-time url namespace (eg _view).

Some plugins will want to be passed all the documents in the database (like views do, and the runners I describe in the wiki) and some plugins will want to operate at query time (like search). Brad makes the suggestion that some plugins may want to operate on the contents of a map or reduce view. Hopefully we can find a few more edge cases and out-there ideas so we have a full picture before we try to cut it down to just the essentials and start writing code.

--
Chris Anderson
https://mail-archives.apache.org/mod_mbox/couchdb-user/200806.mbox/%3Ce282921e0806190119i7fb2b271n43d931691914a94a@mail.gmail.com%3E
Today Widget - Just some comments

Sorry if this has been covered before, but I tried to use the Today Widget today (I have tried before, but a long time ago). Anyway, I was surprised about what I could do. I was able to load a CustomView using @JonB's PYUILoader (loading a pyui file into a custom class), I was able to use the update method in the custom class to update a timer value, and I could interact with a database (a single call for now). Anyway, it all just works. Well, except for the bg_color. The way I am doing it, I end up with a white bg_color. I tried setting bg_color = None; that does not work. I will have to find out how to do that.

The code below is not runnable on another's device; I just wanted to start something and see if things like the update method would work. I am not sure if different devices get more memory to run Today widgets or not. So the code below is not meant to be smart/polished. I just wanted to see if it would work.

#!python3

import appex
import ui
import arrow
from tinydb import TinyDB, where


def GetTimerFinished(db_fn, timer_name):
    with TinyDB(db_fn) as db:
        rec = db.search(where('timer_name') == timer_name)
    return rec[0]


def SetTimerFinished(db_fn, timer_name, days=0, hours=0, minutes=0):
    utc = arrow.utcnow()
    finish_time = utc.shift(days=+days, hours=+hours, minutes=+minutes)
    rec = dict(timer_name=timer_name,
               entered=utc.for_json(),
               finish_time=finish_time.for_json(),
               elapased=False,
               )
    with TinyDB(db_fn) as db:
        db.upsert(rec, where('timer_name') == rec['timer_name'])
    return rec


class AppexTestClass(PYUILoader):
    def __init__(self, pyui_fn, db_fn, timer_name, *args, **kwargs):
        super().__init__(pyui_fn, *args, **kwargs)
        #self.bg_color = None
        self.db_fn = db_fn
        self.timer_name = timer_name
        self.local_finish_time = 0
        rec = GetTimerFinished(db_fn, self.timer_name)
        self.update_interval = 1
        self.count = 0
        self.expired = False
        self.calc_time()
        self['f_timer_name'].text = rec['timer_name']

    def calc_time(self):
        timer_rec = GetTimerFinished(self.db_fn, self.timer_name)
        self.timer_name = timer_rec['timer_name']
        utc = arrow.get(timer_rec['finish_time'])
        local_finish_time = utc.to('Asia/Bangkok')
        self.local_finish_time = local_finish_time
        if self.local_finish_time < arrow.now():
            self.expired = True

    def time_remaining(self):
        return self.local_finish_time - arrow.now()

    def update(self):
        if self.expired:
            self['f_time_left'].text = 'Expired'
            return
        self['f_time_left'].text = \
            self.format_timedelta(self.local_finish_time - arrow.now())

    def format_timedelta(self, td):
        hours, remainder = divmod(td.total_seconds(), 3600)
        minutes, seconds = divmod(remainder, 60)
        hours, minutes, seconds = int(hours), int(minutes), int(seconds)
        if hours < 10:
            hours = '0%s' % int(hours)
        if minutes < 10:
            minutes = '0%s' % minutes
        if seconds < 10:
            seconds = '0%s' % seconds
        return '%s:%s:%s' % (hours, minutes, seconds)


def main():
    db_fn = 'my_timers.json'
    pyui_fn = 'appex_test_ui.pyui'
    timer_name = 'Wonder Wars'
    #SetTimerFinished(db_fn, timer_name, hours=5, minutes=35)
    v = AppexTestClass(pyui_fn, db_fn, timer_name)
    appex.set_widget_view(v)


if __name__ == '__main__':
    main()

def format_timedelta(self, td):
    hours, remainder = divmod(td.total_seconds(), 3600)
    minutes, seconds = divmod(remainder, 60)
    return '%02.0f:%02.0f:%02.0f' % (hours, minutes, seconds)

@ccc, thanks. Actually I just copied that code from Stack Overflow. I hadn't realised before that there is no built-in formatting for TimeDelta objects. The method posted is also good enough as it's limited to hours. I got another code snippet off ActiveCode that handles weeks all the way down to microseconds if you need it. The code is here. I am still looking at the arrow docs to see if it has support to format TimeDelta objects. As far as I can see it doesn't. When you do a calculation operation in arrow, you are returned a TimeDelta object, not an arrow object. Just for fun, I will look at Maya to see if it supports it. It would not make me swap if it did, though, because arrow ships with Pythonista.
Oh, arrow has the humanise method; not the same, but still handy.

I went through the code above more closely and saw I had quite a few mistakes, which I fixed up. Mainly the way I built it up; I wasn't sure what would work. I am surprised it works, using the PYUILoader and update method etc. But I guess it's a nice testament to how well things are put together in Pythonista.

The other thing that I was able to do was add a table to the view. I didn't try interacting with it; I just added a TableView in the Designer. I think this may be getting to the edge, though, as I often got an 'Unable to Load' message in the Today view. I didn't realise at first, but that message is a button. If you tap it, the view loads.

Screen Shot of TableView in Today widget.

Maybe this is no news for a lot of people. I am not sure I have seen a scrolling table in a Today Widget before (I mean from a company).

- Thanks for the reply. However, I am not sure "clear" is actually recognised as a legitimate param (if it's a CSS name then it would be), but calling ui.set_color(x) when x is not recognised will set the bg_color to (0.0, 0.0, 0.0, 0.0) regardless - a tuple of RGBA. E.g. you could call ui.set_color(None) or ui.set_color("Hey_Jude") and it will still return (0.0, 0.0, 0.0, 0.0). Again, maybe "clear" is recognised, but maybe it's not, and the default behaviour for an unrecognised param is being returned.
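The divmod-based formatting from the thread is easy to check on its own, outside Pythonista:

```python
from datetime import timedelta


def format_timedelta(td):
    # Same approach as the compact version posted above:
    # split total seconds into hours, then minutes and seconds.
    hours, remainder = divmod(td.total_seconds(), 3600)
    minutes, seconds = divmod(remainder, 60)
    return '%02.0f:%02.0f:%02.0f' % (hours, minutes, seconds)


print(format_timedelta(timedelta(hours=5, minutes=3, seconds=7)))  # → 05:03:07
```

One caveat: %.0f rounds, so a fractional seconds value near a minute boundary can print as 60; truncating with int() first avoids that.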
https://forum.omz-software.com/topic/4628/today-widget-just-some-comments
Technical Support
On-Line Manuals
CARM User's Guide (Discontinued)

#include <stdio.h>

int vsprintf (
    char *buffer,         /* pointer to storage buffer */
    const char *fmtstr,   /* pointer to format string */
    char *argptr);        /* pointer to argument list */

The vsprintf function formats a series of strings and numeric values and stores the resulting string in buffer. This function is similar to sprintf, except that it accepts a pointer to a list of arguments rather than the arguments themselves.

The vsprintf function returns the number of characters actually written to buffer.

See also: gets, puts, sscanf, vprintf

#include <stdio.h>
#include <stdarg.h>

char etxt[30];    /* text buffer */

void error (char *fmt, ...)
{
    va_list arg_ptr;

    va_start (arg_ptr, fmt);        /* format string */
    vsprintf (etxt, fmt, arg_ptr);
    va_end (arg_ptr);
}

void tst_vsprintf (void)
{
    int i;

    i = 1000;

    /* call error with one parameter */
    error ("Error: '%d' number too large\n", i);

    /* call error with just a format string */
    error ("Syntax.
http://www.keil.com/support/man/docs/ca/ca_vsprintf.htm
I have this question to deal with... Here is the question:

Create a class called Date that includes three pieces of information as instance variables - a month (type int), a day (type int) and a year (type int). Your class should have a constructor that initializes the three instance variables and assumes that the values provided are correct. Provide a set and a get method for each instance variable. Provide a method displayDate that displays the month, day and year separated by forward slashes (/). Write a test application named DateTest that demonstrates class Date's capabilities.

Here are my two classes as I wrote them out:

Code Java:
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */

/**
 * @Elegbede M.D
 * 27-08-2010
 */
public class Date {
    private int year;
    private int month;
    private int day;

    public Date(int yr, int mnt, int dy) {
        year = yr;
        month = mnt;
        day = dy;
    }

    public void setYear(int yr){
        year = yr;
    }

    public void setMonth(int mnt){
        month = mnt;
    }

    public void setDay(int dy){
        day = dy;
    }

    public int getYear(){
        return year;
    }

    public int getMonth(){
        return month;
    }

    public int getDay(){
        return day;
    }

    public int displayDate(){
        System.out.println("Today's date is" +month+ "/" +day+ "/" +year+".");
        return displayDate();
    }
}

The second class, i.e. the test class:

Code Java:

It keeps repeating "Today's date is27/2010/8" and then comes with an error. I am sure there is an error but where it is, I don't know. Please help so that I may move to the next chapter.
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/5112-what-wrong-here-please-printingthethread.html
I use a lot of images in my project (about 100-120, 10-30 kB each). The images are on the server, loaded (source = "media/yes.png") and displayed in the window by clicking on a button. There is a delay due to the large number of images (7-10 s). I created a class to embed the images in the project:

package {
    import spark.primitives.BitmapImage;

    public class embedpng {
        [Embed(source="media/yes.png")]
        [Bindable]
        public var image1:BitmapImage;

        [Embed(source="media/no.png")]
        [Bindable]
        public var image2:BitmapImage;

        ........................................
        ........................................

        [Embed(source="media/ya.png")]
        [Bindable]
        public var image110:BitmapImage;
    }
}

Calling the class in the main MXML:

import embedpng;

However, this does not remove the delay, and the size of the swf is not increased compared to before. How do I solve the problem with fast loading of images? Thank you.

I'm not quite sure if I understand this one right?

1) A lot of images? What has that to do with anything???
B) 10 to 30 kB each!???
C) (There is a delay due to the large number of images) - I don't think so. If there is a delay of 7 to 10 seconds then there is for sure something else very wrong!

And why you would embed images, except icons etc., is beyond me, sorry. Maybe a couple for starters, but for sure not the lot; others could load while, say, reading the page etc. I have myself many galleries of images on a server which are normally 120 to 180 kB, going up to maybe 240 kB each, and yes, I do have a slight delay at the first showing but surely not after that at all.

regards aktell

Yes, you're right, I did not explain everything. There are several XML files with templates. Each contains the path to the images (source="media/yes.png"). A delay of 7-10 seconds happens only the first time, then 1-2 seconds.
https://forums.adobe.com/thread/769185
In this article, you'll learn how to build apps using Firevel - a serverless Laravel framework.

In recent decades the PHP community has gone through some major changes. First, we saw billion-dollar platforms like Facebook adopting PHP as a primary language. Then tools like the Laravel framework quickly rose in popularity as they enabled the community to quickly build modern applications. Finally, PHP 7 gave the language a bright future with performance improvements and features like type hinting.

Today, developing applications in PHP has never been simpler, but taking advantage of modern serverless services has remained exclusive to languages like JavaScript or Python. Things are about to change yet again for the PHP community, as offerings like Google App Engine and Firestore make running completely serverless PHP apps possible.

Serverless apps are usually simple to set up, cheap (or free) with low traffic, and can scale up to handle high loads extremely quickly. Another benefit is that you don't need to worry about server administration, as these new services provide fully managed infrastructure.

The main reason I started to work on a serverless PHP solution was the time I was wasting bootstrapping every new microservice. In addition, once a service was set up it would continue to cost money from machines idling (for example staging, or services used only during office hours).

Database

After analyzing a wide spectrum of database solutions from different providers and running some benchmark tests, I found that Google BigQuery is the best serverless solution for data warehousing and Firestore is the best NoSQL database for production queries. In both cases, the key benefits are simplicity and performance.

Backend

PHP 7's release in the Google App Engine Standard Environment was the first puzzle piece that allowed the serverless project to start. Google App Engine scales down to zero and is eligible for GCP's free tier.
Another bonus: it's integrated with Google Firestore, so you don't need to set up credentials to access your database.

Firevel

I wanted to make sure that I could use Laravel in this project, so I developed several packages (together named Firevel) that make the framework Google App Engine friendly. Some examples of these changes are disabling writes outside of the tmp directory, allowing a Google proxy, and setting up a Stackdriver log channel. The packages included with Firevel are:

By default, all exceptions appear in your App Engine Application Errors, and session data is stored inside a Firestore collection called 'sessions'. If you use the Firestore cache driver, you can use all the Laravel Cache features you are used to. The cache is stored in a Firestore collection named 'cache'.

>>> Cache::set('foo', 'bar');
=> true
>>> Cache::get('foo');
=> "bar"

The Firestore client can be used for direct Firestore calls and is accessible through the Firestore facade. Authentication is handled by App Engine behind the scenes.

Firestore::collection('cities')
    ->document('LA')
    ->set(['name' => 'Los Angeles', 'state' => 'CA']);

Eloquent on Firestore

If you have ever worked with Laravel before, you can probably agree that the code is very extension friendly. There are plenty of ways to build custom drivers, extensions, plugins, etc. However, I don't think I can say the same about the heart of Laravel: the Eloquent ORM. Eloquent is one of the main reasons Laravel is so successful. It's a beautiful and simple ActiveRecord implementation designed to work with SQL databases, but it wasn't designed with NoSQL in mind. So I built Firequent, an Eloquent replacement for Firevel that is currently in beta. I managed to reach general functionality and support for methods like find(), create(), make(), where() and limit(), but it's not yet ready to work with relationships or any advanced queries. You can create a Firequent model simply by extending Firevel\Firequent\Model.
In the Firestore NoSQL world you don't need database schemas, so instead of creating migrations you can use mass assignment.

<?php

namespace App;

use Firevel\Firequent\Model;

class Post extends Model
{
    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    public $fillable = ['name', 'description'];
}

You can also assign attributes on the fly, directly on the model:

$post = Post::make(['name' => 'Foo', 'description' => 'Bar']);
$post->public = true;
$post->save();

Data is available instantly in your Firestore dashboard. The coolest part of Firevel is the simple setup. If you have the gcloud CLI installed on your machine, you can set up an entire app with composer create-project firevel/firevel and then deploy it with a gcloud app deploy command. Using this you can build a scalable web app in mere minutes. It's also a great tool for building microservices: with integrated Firebase Auth and the simple deployment, you can build a new microservice without all the painful infrastructure setup. If your code is limited to a simple CRUD application and your index queries just filter by values, you might build your app and not even notice any difference from the standard Laravel experience. But with more advanced use cases, you can't simply replace MySQL with NoSQL unless you update your approach to code structure. Firestore offers great scalability, flexibility, and simplicity, but everything comes with a cost, so keep the tradeoffs in mind. Firevel is a 100% serverless framework. It currently has limitations and needs more production case studies before it becomes mature, but I've found it to be a joy to work with, and it can save you a lot of time.
To me, the possibility of running apps at scale, entirely in PHP, without expensive-to-manage servers, is like finding a unicorn. Thanks for reading ❤ If you liked this post, share it with all of your programming buddies! Follow me on Facebook | Twitter
https://morioh.com/p/fd192a497274
Running the IPython console

If IPython has been installed correctly, you should be able to run it from a system shell with the ipython command. You can use this prompt like a regular Python interpreter, as shown in the following screenshot:

Command-line shell on Windows

If you are on Windows and using the old cmd.exe shell, you should be aware that this tool is extremely limited. You could instead use a more powerful interpreter, such as Microsoft PowerShell, which is integrated by default in Windows 7 and 8. The simple fact that most common filesystem-related commands (namely, pwd, cd, ls, cp, ps, and so on) have the same name as in Unix should be a sufficient reason to switch.

Of course, IPython offers much more than that. For example, IPython ships with tens of little commands that considerably improve productivity. Some of these commands help you get information about any Python function or object. For instance, have you ever had a doubt about how to use the super function to access parent methods in a derived class? Just type super? (a shortcut for the command %pinfo super) and you will find all the information regarding the super function. Appending ? or ?? to any command or variable gives you all the information you need about it, as shown here:

In [1]: super?
Typical use to call a cooperative superclass method:
class C(B):
    def meth(self, arg):
        super(C, self).meth(arg)

Using IPython as a system shell

You can use the IPython command-line interface as an extended system shell. You can navigate throughout your filesystem and execute any system command. For instance, the standard Unix commands pwd, ls, and cd are available in IPython and work on Windows too, as shown in the following example:

In [1]: pwd
Out[1]: u'C:\\'
In [2]: cd windows
C:\windows

These commands are actually magic commands, which are central to the IPython shell.
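Outside IPython, plain Python can retrieve the same documentation that super? displays by using the standard inspect module. This is only a rough, plain-Python analogue of the ? introspection shortcut, not what IPython does internally:

```python
import inspect

# Retrieve the docstring that IPython's `super?` would display.
doc = inspect.getdoc(super)

# Show just the first line of the documentation.
print(doc.splitlines()[0])
```

The same call works for any object, for example inspect.getdoc(list.append).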
There are dozens of magic commands, and we will use a lot of them throughout this book. You can get a list of all magic commands with the %lsmagic command.

Using the IPython magic commands

Magic commands actually come with a % prefix, but the automagic system, enabled by default, allows you to conveniently omit this prefix. Using the prefix is always possible, particularly when the unprefixed command is shadowed by a Python variable with the same name. The %automagic command toggles the automagic system. In this book, we will generally use the % prefix to refer to magic commands, but keep in mind that you can omit it most of the time, if you prefer.

Using the history

Like the standard Python console, IPython offers a command history. However, unlike Python's console, the IPython history spans your previous interactive sessions. In addition to this, several keystrokes and commands allow you to reduce repetitive typing. In an IPython console prompt, use the up and down arrow keys to go through your whole input history. If you start typing before pressing the arrow keys, only the commands that match what you have typed so far will be shown. In any interactive session, your input and output history is kept in the In and Out variables and is indexed by a prompt number. The _, __, ___ and _i, _ii, _iii variables contain the last three output and input objects, respectively. The _n and _in variables return the output and input of prompt number n (for example, _5 and _i5). For instance, let's type the following commands:

In [4]: a = 12
In [5]: a ** 2
Out[5]: 144
In [6]: print("The result is {0:d}.".format(_))
The result is 144.

In this example, we display the output of prompt 5, that is, 144, on line 6.

Tab completion

Tab completion is incredibly useful and you will find yourself using it all the time.
Whenever you start typing any command, variable name, or function, press the Tab key to let IPython either automatically complete what you are typing, if there is no ambiguity, or show you the list of possible commands or names that match what you have typed so far. It also works for directories and file paths, just like in the system shell. It is also particularly useful for dynamic object introspection. Type any Python object name followed by a dot and then press the Tab key; IPython will show you the list of existing attributes and methods, as shown in the following example:

In [1]: import os
In [2]: os.path.split<tab>
os.path.split      os.path.splitdrive      os.path.splitext      os.path.splitunc

On the second line, as shown in the previous code, we press the Tab key after having typed os.path.split. IPython then displays all the possible completions.

Tab Completion and Private Variables

Tab completion shows you all the attributes and methods of an object, except those that begin with an underscore (_). The reason is that it is a standard convention in Python programming to prefix private variables with an underscore. To force IPython to show all private attributes and methods, type myobject._ before pressing the Tab key. Nothing is really private or hidden in Python. It is part of a general Python philosophy, as expressed by the famous saying, "We are all consenting adults here."

Executing a script with the %run command

Although essential, the interactive console becomes limited when running sequences of multiple commands. Writing multiple commands in a Python script with the .py file extension (by convention) is quite common. A Python script can be executed from within the IPython console with the %run magic command followed by the script filename. The script is executed in a fresh, new Python namespace, unless the -i option has been used, in which case the current interactive Python namespace is used for the execution.
In all cases, all variables defined in the script become available in the console at the end of the script's execution. Let's write the following Python script in a file called script.py:

print("Running script.")
x = 12
print("'x' is now equal to {0:d}.".format(x))

Now, assuming we are in the directory where this file is located, we can execute it in IPython by entering the following command:

In [1]: %run script.py
Running script.
'x' is now equal to 12.
In [2]: x
Out[2]: 12

When running the script, the standard output of the console displays any print statement. At the end of execution, the x variable defined in the script is included in the interactive namespace, which is quite convenient.

Quick benchmarking with the %timeit command

You can do quick benchmarks in an interactive session with the %timeit magic command. It lets you estimate how much time the execution of a single command takes. The command is executed multiple times within a loop, and this loop itself is repeated several times by default. The individual execution time of the command is then automatically estimated with an average. The -n option controls the number of executions in a loop, whereas the -r option controls the number of executed loops. For example, let's type the following command:

In [1]: %timeit [x*x for x in range(100000)]
10 loops, best of 3: 26.1 ms per loop

Here, it took about 26 milliseconds to compute the squares of all integers up to 100000.

Quick debugging with the %debug command

IPython ships with a powerful command-line debugger. Whenever an exception is raised in the console, use the %debug magic command to launch the debugger at the exception point. You then have access to all the local variables and to the full stack traceback in post-mortem mode. Navigate up and down through the stack with the u and d commands, and exit the debugger with the q command. See the list of all the available commands in the debugger by entering the ? command.
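The %timeit benchmark shown above can also be reproduced outside IPython with the standard timeit module, which is what the %timeit magic builds on. A minimal sketch (the reported time will of course differ from machine to machine):

```python
import timeit

# Time 10 executions of the list comprehension, similar to %timeit -n 10 -r 1.
total = timeit.timeit("[x*x for x in range(100000)]", number=10)

# Average time per loop, in milliseconds.
print("%.1f ms per loop" % (total / 10 * 1000))
```

Unlike %timeit, timeit.timeit returns the raw total time, so the per-loop average has to be computed by hand.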
You can use the %pdb magic command to activate the automatic execution of the IPython debugger as soon as an exception is raised.

Interactive computing with Pylab

The %pylab magic command enables the scientific computing capabilities of the NumPy and matplotlib packages, namely efficient operations on vectors and matrices as well as plotting and interactive visualization features. It becomes possible to perform interactive computations in the console and plot graphs dynamically. For example, let's enter the following commands:

In [1]: %pylab
Welcome to pylab, a matplotlib-based Python environment [backend: TkAgg].
For more information, type 'help(pylab)'.
In [2]: x = linspace(-10., 10., 1000)
In [3]: plot(x, sin(x))

In this example, we first define a vector of 1000 values linearly spaced between -10 and 10. Then we plot the graph (x, sin(x)). A window with a plot appears as shown in the following screenshot, and the console is not blocked while this window is open. This allows us to interactively modify the plot while it is open.

Using the IPython Notebook

The Notebook brings the functionality of IPython into the browser for multiline text-editing features, interactive session reproducibility, and so on. It is a modern and powerful way of using Python in an interactive and reproducible way. To use the Notebook, run the ipython notebook command in a shell (make sure you have installed the required dependencies). This will launch a local web server on the default port 8888. Go to http://127.0.0.1:8888 in a browser and create a new Notebook. You can write one or several lines of code in the input cells.
Here are some of the most useful keyboard shortcuts:

- Press the Enter key to create a new line in the cell without executing the cell
- Press Shift + Enter to execute the cell and go to the next cell
- Press Alt + Enter to execute the cell and append a new empty cell right after it
- Press Ctrl + Enter for quick instant experiments when you do not want to save the output
- Press Ctrl + M and then the H key to display the list of all the keyboard shortcuts

Customizing IPython

You can save your user preferences in a Python file; this file is called an IPython profile. To create a default profile, type ipython profile create in a shell. This creates a folder named profile_default in the ~/.ipython or ~/.config/ipython directory. The file ipython_config.py in this folder contains your IPython preferences. You can create different profiles with different names using ipython profile create profilename, and then launch IPython with ipython --profile=profilename to use that profile. The ~ directory is your home directory, for example, something like /home/yourname on Unix, or C:\Users\yourname or C:\Documents and Settings\yourname on Windows.

Summary

We have gone through 10 of the most interesting features offered by IPython in this article. They essentially concern the Python and shell interactive features, including the integrated debugger and profiler, and the interactive computing and visualization features brought by the NumPy and matplotlib packages.
https://hub.packtpub.com/ten-ipython-essentials/
Appendix C: Git Commands - Sharing and Updating Projects

There are not very many commands in Git that access the network; nearly all of the commands operate on the local database. When you are ready to share your work or pull changes from elsewhere, there are a handful of commands that deal with remote repositories.

git fetch

The git fetch command communicates with a remote repository, fetches down all the information that is in that repository but not in your current one, and stores it in your local database. We first look at this command in Fetching and Pulling from Your Remotes, and we continue to see examples of its use in Remote Branches. We also use it in several of the examples in Contributing to a Project. We use it to fetch a single specific reference that is outside of the default space in Pull Request Refs, and we see how to fetch from a bundle in Bundling. We set up highly customized refspecs in order to make git fetch do something a little different than the default in [_getting_notes] and The Refspec.

git pull

The git pull command is basically a combination of the git fetch and git merge commands: Git will fetch from the remote you specify and then immediately try to merge it into the branch you're on. We introduce it quickly in Fetching and Pulling from Your Remotes and show how to see what it will merge if you run it in Inspecting a Remote. We also see how to use it to help with rebasing difficulties in Rebase When You Rebase. We show how to use it with a URL to pull in changes in a one-off fashion in Checking Out Remote Branches. Finally, we very quickly mention that you can use its --verify-signatures option to verify that the commits you are pulling have been GPG signed in Signing Commits.

git push

The git push command is used to communicate with another repository, calculate what your local database has that the remote one does not, and then push the difference to the other repository.
It requires write access to the other repository and so is normally authenticated somehow. We first look at the git push command in Pushing to Your Remotes, where we cover the basics of pushing a branch to a remote repository. In Pushing we go a little deeper into pushing specific branches, and in Tracking Branches we see how to set up tracking branches to push to automatically. In Deleting Remote Branches we use the --delete flag to delete a branch on the server with git push. Throughout Contributing to a Project we see several examples of using git push to share work on branches through multiple remotes. We see how to use it to share tags that you have made with the --tags option in Sharing Tags. In [_sharing_notes] we use it in a slightly less common way: to share references for commit notes, references that sit outside of the normal refs namespace. In Publishing Submodule Changes we use the --recurse-submodules option to check that all of our submodule work has been published before pushing the superproject, which can be really helpful when using submodules. In Other Client Hooks we talk briefly about the pre-push hook, a script we can set up to run before a push completes to verify that the push should be allowed. Finally, in Pushing Refspecs we look at pushing with a full refspec instead of the general shortcuts that are normally used. This can help you be very specific about what work you wish to share.

git remote

The git remote command is a management tool for your record of remote repositories. It allows you to save long URLs as short handles, such as "origin", so you don't have to type them out all the time. You can have several of these, and the git remote command is used to add, change and delete them. This command is covered in detail in Working with Remotes, including listing, adding, removing and renaming remotes. It is used in nearly every subsequent chapter of the book too, but always in the standard git remote add <name> <url> format.
git archive

The git archive command is used to create an archive file of a specific snapshot of the project. We use git archive to create a tarball of a project for sharing in Preparing a Release.

git submodule

The git submodule command is used to manage external repositories within a normal repository. This could be for libraries or other types of shared resources. The submodule command has several sub-commands (add, update, sync, etc.) for managing these resources. This command is mentioned and covered entirely in Submodules.
https://git-scm.com/book/be/v2/Appendix-C%3A-Git-Commands-Sharing-and-Updating-Projects
The REST zealotry needs to end. First, let me establish my bona fides here: I work on the ROME project. I built the first module for Google Base. I have used the Propono project to build an APP service. I use REST. REST is a friend of mine.

Here is the thing: people need to own up to the fact that SOAP has its place. Yes, "SOAP" is neither "Simple," nor arguably about "Object Access," and is only marginally a "Protocol." But SOAP and WS-* have their place. Just own up to it. One of the things I had a chat about at JavaOne last year, over a number of free drinks, is that one of the big advantages of SOAP over REST was the use of nillable. There is a complete semantic difference between an element that is explicitly marked nil and one that is simply absent from the message.

So here is my point, REST people… Yeah, SOAP isn't simple, but the complexity of the envelope tag gives you a clear point to encrypt or pass transaction information. The soap:nillable namespace has to be there, but it doesn't require a protocol change. Can't we just admit that, on a 60/40 basis, REST rules? If you want simple and easy, and your data conforms to basic preserved ENTITIES, REST is great. But when you need encryption beyond basic SSL, differential updates, two-phase commits, reliable messaging — all that "other stuff" — WS-* has a place.

I am getting really tired of religious arguments in my space lately. Sure, dynamically typed languages have a place, but static typing has some real advantages. Yes, environment X gives you A, but environment Y gives you B. I am not going to tell anyone that WS-* is the end-all-be-all, but I wish the REST crowd would own up to the idea that WS-* solves some hard problems, and with basic tooling is not "hard to use."
http://www.oreillynet.com/onjava/blog/2007/10/
I recently bought an Arduino pH sensor kit for measuring the pH value of my hydroponic setup. It's cheap but comes with very little information or documentation on how to use it, so I decided to figure out myself how it works and how to use it.

Popular pH measurement kits for Arduino

If you search for a pH sensor for Arduino on the Internet, you are likely to see 3 major commercially available or mass-produced solutions:

Atlas Scientific offers a high-quality and well-designed sensor kit for pH measurement. Its Gravity analog pH kit, which consists of a consumer-grade pH probe and an interface board plus 3 packages of calibration buffer solutions, costs $65.00. Atlas Scientific hardware is high quality but doesn't seem to be open source.

DFRobot also has a solution with the same name Gravity (why?) as Atlas Scientific. Its version 1 Gravity: Analog pH Sensor Kit consists of a pH probe plus the sensor board and is priced at $29.50. There is a version 2 of the Gravity: Analog pH Sensor Kit, which comes with an enhanced board design at $39.50 and includes buffer solutions and mounting screws for the board. DFRobot published its schematic, PCB layout and Arduino code for version 1 on its website and GitHub under the GPL2 license, but it only publishes the PCB layout for version 2, without the schematic, so I don't know what exactly was enhanced in the design for version 2.

The third commonly available pH sensor kit for Arduino, the one you see in almost every e-commerce marketplace such as Taobao, AliExpress and Amazon, is this "mystery" pH sensor kit that I bought. You can find it for as low as $17.00 for a pH probe with the sensor board. It is a "mystery" because multiple Chinese manufacturers seem to produce the same board, but I can't really find out which company actually owns the design.
I bought it anyway, thinking that if I could understand how the pH probe works, then with a little bit of reverse engineering of the circuit design to get a better understanding of the circuitry, I should be able to figure out how to make it work. This fits my tinkering spirit well…

Other than those three commonly available pH sensor kits, there are others on the market, but they are relatively niche with limited distribution. If you are interested in pH measurement or pH sensor boards, you might want to read further in "A review on Seeed Studio pH and eC sensor kits – Part 1".

How does a pH probe work electronically?

A pH probe consists of two main parts: a glass electrode and a reference electrode, as shown in the picture below. I'm not very good at chemistry, so I won't try to explain it that way; this pH theory guide provides a very comprehensive explanation of the theory behind it. In a nutshell, pH is determined essentially by measuring the voltage difference between these two electrodes. The pH probe is a passive sensor, which means no excitation voltage or current is required. It produces a voltage output that is linearly dependent upon the pH of the solution being measured. An ideal pH probe produces 0 V output when the pH value is at 7; it produces a positive voltage (a few hundred millivolts) as the pH value goes down, and a negative voltage as the pH value goes up, caused by the hydrogen ions forming at the outside (and inside) of the membrane glass tip of the pH probe when the membrane comes into contact with the solution. The source impedance of a pH probe is very high, because the thin glass bulb has a large resistance that is typically in the range of 10 MΩ to 1000 MΩ. Whatever measurement circuit connects to the probe needs a very high input impedance in order to minimise the loading effect of the circuit.

Hardware – The pH sensor board explained

The pH sensor board that I bought came without any user guide, schematic or example code.
I asked the small Chinese vendor for information, but in vain. I decided to reverse-engineer the schematic diagram, but eventually I found it in the attachment of this Arduino forum discussion. The pH sensor board can be divided into 3 sections based on functionality; I colored the three key sections differently for the discussion here.

pH Measurement Circuit

The light green section with the TLC4502 high-impedance operational amplifier basically consists of a voltage divider and a unity-gain amplifier. The pH output (Po) provides an analog output for pH measurement. Since the pH probe swings between positive and negative voltages, and since the TLC4502 operates from a single power source, half of the TLC4502 is used as a voltage divider to provide a reference voltage of 2.5 V to "float" the pH probe input, so that the output at Po will swing +/-2.5 V around that reference, based on the pH value. A potentiometer, RV1, is used for calibration purposes, which I will discuss later. This part of the circuit is well designed, and it is all that is needed for measuring the pH value. The other parts of the board, in my opinion, are not well designed and fall into the category of "nice to have" rather than essential.

pH Threshold Detection Circuit

The yellow section provides a pH threshold detection/notification circuit. For example, you could adjust the potentiometer RV2 so that when the pH level reaches a threshold (say 7.5), the red LED D1 turns on (the digital output Do changes from high to low). Alternatively, you could use it to detect a lower pH threshold: say, when the pH value drops below 5.5, the red LED turns off and Do changes from low to high. But you can't set both lower and upper thresholds with this circuit. In my opinion, it is easier to just use a software solution than this hardware solution for threshold detection.
Temperature Reading Circuit

The light blue/cyan section of the board consists of one and a half LM358 op-amps and provides an analog reading at To. U2B of the LM358 acts as a not-so-accurate voltage divider and provides a voltage reference of 2.5 V to a Wheatstone bridge that consists of R13 – R15 and a thermistor, TH1. U3A behaves as a differential op-amp; its output is then passed through a low-pass filter and further amplified by a non-inverting op-amp, U3B. This entire circuit has nothing to do with pH measurement, at least not directly; I will talk about it toward the end of this article. The sole reason for measuring temperature in the context of measuring pH is that the slope of the pH curve changes with temperature between 0 and 100 degrees Celsius. It is therefore important to measure the temperature of the solution and add a temperature compensation factor to the pH calculation. One thing is interesting: all the manufacturers of this board design that I saw in the market have the thermistor soldered on the board, instead of using a water-proof thermistor probe like the one I described in my previous post. With the thermistor on-board, it measures the ambient temperature near the board instead of the temperature of the solution where pH is measured, which simply doesn't make sense. This makes me think that all those Chinese manufacturers are simply copying the design from a circuit diagram, or reverse-engineering it, without understanding the purpose of the thermistor in the context of a pH measurement application. Now that I have studied and understood the circuit diagram, it is time to calibrate the pH sensor and write some code to measure the pH value!

How to calibrate the pH sensor?

As discussed previously, the pH probe by design swings between negative and positive voltages.
When the pH reading is at 7.0, the pH output is offset by 2.5 V so that both the negative and positive voltages generated by the pH probe can be represented as positive values over the full range. This means that when pH is at 0, Po is at 0 V, and when pH is at 14, Po is at 5 V. To calibrate the reading so that Po sits at 2.5 V when pH is at 7.0, disconnect the probe from the circuit and short-circuit the inner pin of the BNC connector to the outer BNC ring. With a multimeter, measure the voltage at the Po pin and adjust the potentiometer until it reads 2.5 V. Don't worry if you don't have a multimeter; you can write an Arduino sketch to read the analog input by connecting Po to analog input A0 of the Arduino.

ph_calibrate.ino:

#include <Arduino.h>

const int adcPin = A0;

void setup() {
  Serial.begin(115200);
}

void loop() {
  int adcValue = analogRead(adcPin);
  float phVoltage = (float)adcValue * 5.0 / 1024;
  Serial.print("ADC = ");
  Serial.print(adcValue);
  Serial.print("; Po = ");
  Serial.println(phVoltage, 3);
  delay(1000);
}

Connect Po to analog input A0 on the Arduino, and G to the Arduino GND. Run the Arduino sketch and open the Serial Monitor of the Arduino IDE to observe the reading; slowly adjust the potentiometer RV1 (the one near the BNC connector on the board) until the Po reading equals 2.50 V. This assumes that all pH probes are equal and produce exactly 0 V at a pH reading of 7.0, but in reality all probes differ slightly from each other, especially consumer-grade pH probes. Temperature also affects the reading of the pH sensor slightly, so the better way is to use a pH buffer solution of pH = 7.0 to calibrate the probe. Every buffer solution has temperature compensation information on its package that you can factor into your calibration. pH buffer packages for calibration purposes are available in liquid form or in powder form; the liquid pack is easy to use, but the powder pack is better for storage.
These solutions are sold in different values, but the most common are pH 4.01, pH 6.86 and pH 9.18. The pH response is fairly linear over a certain range (between pH 2 and pH 10), so we need two calibration points to determine the line and derive its slope, which then lets us calculate the pH value for any given output voltage (see the Figure 2 chart above). Which buffer value to use for the second calibration point depends on your application: if your application measures acidic solutions, use the pH 4.01 buffer solution for the second calibration; but if your application mostly measures basic/alkaline solutions, use the pH 9.18 buffer solution. In my case, as hydroponics for vegetable growing tends to be slightly acidic, with pH ranging between 5.5 and 6.5, I use the pH 4.01 buffer solution for my calibration. To avoid cross-contamination, dip the probe in distilled water for a couple of minutes before dipping it into a different buffer solution. To increase accuracy, let the probe stay in the buffer solution for a couple of minutes before taking the reading as the result. Use the same Arduino sketch to get the voltage reading for pH = 4.01 and write down the voltage value; in my case, the voltage is 3.05 V at pH = 4.01. The voltage readings at a pH of 4.01 (Vph4) and at a pH of 7.0 (Vph7) allow us to draw a straight line, and we can get the pH change per volt, m, as:

m = (pH7 - pH4) / (Vph7 - Vph4)
m = (7 - 4.01) / (2.5 - 3.05)
m = -5.436

So the pH value at any voltage reading at Po can be derived with this formula:

pH = pH7 - (Vph7 - Po) * m

i.e.

pH = 7 - (2.5 - Po) * m

Measure pH value

With this formula, we can create the Arduino sketch to measure the pH value based on the voltage reading at Po.
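As a quick sanity check before moving to the Arduino sketch, the two-point formula can be evaluated in plain Python, using the example readings above (Vph7 = 2.5 V at pH 7, Vph4 = 3.05 V at pH 4.01):

```python
# Two-point pH calibration: slope from the pH 7 and pH 4.01 buffer readings.
Vph7, Vph4 = 2.5, 3.05          # measured Po voltages at the two buffers
m = (7 - 4.01) / (Vph7 - Vph4)  # pH change per volt, about -5.436

def ph_from_voltage(po):
    """Convert a Po voltage reading into a pH value."""
    return 7 - (Vph7 - po) * m

print(round(m, 3))                      # -5.436
print(round(ph_from_voltage(3.05), 2))  # 4.01 (recovers the calibration point)
print(round(ph_from_voltage(2.5), 2))   # 7.0
```

The formula recovers both calibration points exactly, which confirms the slope and offset are consistent before hard-coding m into the Arduino code.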
    #include <Arduino.h>

    const int adcPin = A0;

    // calculate your own m using ph_calibrate.ino
    // When using the buffer solution of pH 4 for calibration, m can be derived as:
    // m = (pH7 - pH4) / (Vph7 - Vph4)
    const float m = -5.436;

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      float Po = analogRead(adcPin) * 5.0 / 1024;
      float phValue = 7 - (2.5 - Po) * m;
      Serial.print("ph value = ");
      Serial.println(phValue);
      delay(5000);
    }

How about Temperature Measurement?

As I mentioned before, it doesn't make sense to measure the ambient temperature near the PCB, so the first thing I did was de-solder the on-board thermistor and replace it with one of those waterproof thermistors.

A Wheatstone bridge circuit is nothing more than two simple series-parallel arrangements of resistances connected between a reference voltage supply and ground, producing zero voltage difference between the two parallel branches when balanced. When one arm of the arrangement is a thermistor, its resistance changes as the temperature changes, unbalancing the two arms; a voltage difference then develops between the two parallel branches in accordance with the change in thermistor resistance, which is directly related to the change in temperature.

In this particular circuit, the voltage reference is provided by U2B, which forms a voltage divider and produces a reference voltage (let's call it Vref) of 2.5 V at pin 7 of U2B. According to its characteristics, the thermistor has a resistance of 10 kΩ at a temperature of 25 degrees Celsius, at which point the Wheatstone bridge is balanced: the bridge output voltage Vd, taken at the terminals of resistors R16 and R18, is zero, and it swings above and below 0 V as the temperature changes. Vd is then amplified by U3A, which appears to be a differential amplifier, while U3B is a typical non-inverting amplifier.
As I wasn't quite sure about the gain of U3A, I decided to ask on the Electrical Engineering Stack Exchange, and I got my questions answered within an hour. The circuit has a total gain of 14.33 when the thermistor is at 10 kΩ (i.e. when the temperature is 25 degrees Celsius). However, the gain of U3A changes when the thermistor resistance changes; obviously this is not a very good design. I also had my suspicion confirmed that there is a 20 kΩ resistor missing between pin 3 of U3A and ground on the circuit diagram; interestingly, the circuit board is designed to take this resistor, but the spot where it is supposed to go is left empty (why?). Inspecting the circuit further, I noticed that R12 on the board actually has a value of 51.1 kΩ instead of the 100 kΩ shown in the circuit diagram, so the overall gain will be 1.33 + 5.11 + 1 = 7.44.

We can derive Vd from the measured voltage at To, and further derive the resistance of TH1 at the temperature where To is measured:

    Vd = To / 7.44
    Vd = Vref * (R14 / (R14 + R15)) - Vref * (R13 / (R13 + TH1))

The absolute temperature T, based on the Steinhart–Hart (B-parameter) equation for thermistors, can then be derived from:

    T = 1 / (1/To + 1/B * ln(TH1/Ro))

Where:

- T is the absolute temperature to be measured, in Kelvin;
- To (here the reference temperature, not the output voltage above) is 25 degrees Celsius expressed in Kelvin;
- Ro is the thermistor resistance at To;
- B is the Beta or B parameter, 3950, provided by the manufacturer in the specification.

In theory, the primary benefit of a Wheatstone bridge circuit is its ability to provide extremely accurate measurements in contrast with a simple voltage divider, as a voltage divider is often affected by the loading impedance of the measuring circuit. In practice, the accuracy of the Wheatstone bridge depends heavily on the precision of the resistors that form the bridge, the precision of the voltage reference, and the circuit connected to the bridge.
Although I figured out the formula for measuring the temperature, I did not write code to calculate it, as the gain of U3A varies as the thermistor value varies with temperature. This makes the reading almost unpredictable, and I will probably not use this circuit for measuring the water temperature without further modifying the design.

In Summary

Overall, this pH sensor board has a good pH measurement circuit design; the rest of the circuit is quite useless and a little over-engineered. By eliminating the bad parts of the circuit design and keeping the good parts, a pH sensor board could be simpler, and maybe slightly cheaper, than the current design.

Related topics:
A review on Seeed Studio pH and EC sensor kits – Part 1 (pH).
A review on Seeed Studio pH and EC sensor kits – Part 2 (EC).

22 comments by readers

Dear Sir! This is a really helpful topic; thank you so much for sharing. I have been making a pH meter kit since September 2019, and I have a problem. I tested with pH 7.01 buffer solution, and it showed pH = 7.04 at 2.51 V; with pH 4.01 buffer solution I got pH = 3.98 at 3.02 V. I then took a 1-liter wastewater sample from the collection pit (wastewater system) and tested it; it displayed pH = 7.62 at 2.49 V. But when I dropped the pH probe directly into the wastewater treatment system, it showed pH = -6.68 at 4.83 V. Can you explain why the Po output voltage jumps from 2.49 V to 4.83 V with the same water sample? Thank you so much.

I don't know what caused it, but pH = -6.68 at 4.83 V is basically outside the linear range that the probe can accurately measure, or your probe is not connected. Also, don't measure pH in a running flow, as the value will not be accurate, and if you have a TDS probe, keep it away from the pH probe.

Thank you for the nice article. I would like to add the following.
When the pH sensor module is used with a microcontroller like an Arduino, calculations and calibrations can be simplified by eliminating the voltage calculation. When calibrating with the pH 7.0 solution, phVoltage = (float)adcValue * 5.0 / 1024 equals 2.5 only if adcValue = 512, so RV1 can be tuned to that. The slope can then be used in the equation. I think the voltage calculation is useful when someone designs a circuit without software, in which case the signal from 'Do' can be used to trigger a device like a relay to turn something on or off. Depending on the circuit, an active-high or active-low relay can be used.

Thanks for the comment. The 'Do' output has a hysteresis effect, meaning that if you expect it to trigger when a value rises to a certain point, it will not necessarily trigger at the same point when the value falls back. So personally, I prefer a software solution over using 'Do'.

I too prefer the software approach. Many of these boards are based on expired patents; that's why they are 'low-cost'. Which means they are old, and back in the day software was not widely used; most circuits were designed and run purely in hardware. I can think of a few applications that might use the function. Of course, accuracy can be achieved, not at the speed we are accustomed to, but enough to satisfy the needs.

Hi, great article. pH changes with temperature; how would we apply temperature compensation to the pH readings? Thanks

The pH variation due to temperature is less significant than for EC measurement. For example, a pH of 4.00 at 25 degrees C will be around 4.01 at 30 C, increasing by about 0.01 for approximately every 5 degrees of temperature change, but it remains at about 4.00 in the range between 10 and 25 C. So I would simply ignore the temperature factor in pH measurement unless you are measuring pH above 30 degrees C or below 10 degrees C. I wouldn't trust the temperature measurement circuit on this particular board, for the reasons I mentioned in the article.
If you really want temperature measurement, get a waterproof DS18B20.

Hi there, check out the PDF linked below, which gives a formula for applying temperature compensation.

Hi Henry, I am not an expert, but I want to build a smart pH and EC sensor that helps me regulate the water nutrients and pH levels for an indoor hydroponics system. Can you guide me on an economical way to achieve this? Regards, Chan

Hi there, thanks for the post and discussion. I too duplicated the whole circuit for pH measurement, adding a waterproof DS18B20 and an EC sensor on the PCB, with which I planned to build an eco-measuring system for indoor plant growing. Everything worked fine, but I have two problems:

1. Big variations in the pH value when measured alone, without the DS18B20 and EC sensor. I am not sure why; since I don't have an oscilloscope, I couldn't identify where the variation is coming from. Maybe you could help with it.

2. With all three sensors together, the DS18B20 and EC sensor work fine, but the pH sensor gets significant interference from the DS18B20 and EC sensor, which seems reasonable (each of them emits a potential into the solution that impacts the microvolt-level pH sensor). I will have to isolate/disconnect the DS18B20 and EC sensor through Arduino software and a hardware switch circuit.

The calibration of the pH sensor went well using standard buffer solutions, outputting data exactly as expected, but when I put it in a glass of tap water, it gave big variations, as mentioned above. I would appreciate it if anyone could help me out.

The things that I can think of: 1) A pH sensor doesn't work well in running water or in a current stream. 2) Which MCU are you using? Arduino? I would suggest that you add 10 uF and 100 nF capacitors in parallel at the pH sensor board's Vcc and ground.
3) If you are using a pH sensor board other than this design, make sure the output impedance of the board feeding the Arduino's ADC is less than 10 kΩ; you can read about my recent experience evaluating Seeed Studio's pH sensor board for the problem I encountered. I don't have any issue having a DS18B20 temperature sensor together with a pH sensor; in my setup, the pH sensor is right next to the DS18B20 with less than 1 cm of separation. The DS18B20 should stay away from the EC sensor, though, as the EC sensor generates an electric field around its tip.

Thanks for the quick response, Henry. In my view, the DS18B20 and EC sensor have a direct impact on the pH sensor once powered on. As confirmed by your comment, I can understand the EC sensor impacting the pH sensor, but the DS18B20 shouldn't; I don't understand this part and need to figure it out. Another alternative is to use a waterproof B3950 thermistor with a screw nut for easy installation. I am actually making a board with all components on it, including the pH and EC sensor circuits, with an ATMEGA328P MCU acting as an Arduino Nano. Adding 10 uF and 100 nF at the Vcc of the amplifier and the CD4052 is a good idea; somehow I missed it. The layout needs to be improved to bring the CD4052 closer to the pH BNC socket. I haven't experienced the ADC impedance issue; I will check out your blog later for reference. Great comments, thanks Henry.

I mentioned in my research that the temperature sensor was removed and I installed another sensor. Can the sensor be left in place and the measured temperature adjusted? What is the effect of temperature on the pH value?

On pH dependency on temperature, you should consult your probe manufacturer's user guide. In general, at 0 °C the pH of pure water is about 7.47; at 25 °C it is 7.00, and at 100 °C it is 6.14. For practical application, it depends on your region; I live in a tropical region, and see my answer to question 3 above.

Hi Henry!
First of all, I want to say that this is very, very useful information; I really appreciate it. I am making my final-year project, an automatic control system for hydroponics. As you wrote at the beginning of the post, there is very little, almost no, information about how to calibrate these sensors properly. Your post is very informative, but I still couldn't figure out the point about temperature compensation. Can you please clarify it? From what I have understood, you didn't write any code for temperature compensation because the built-in thermistor is useless, since it measures ambient temperature (not in water). I do want to make a precise calibration of my EC sensor, and I have a submersible DS18B20 sensor. Should I leave the T1 (temperature out) pin on the EC sensor unconnected? Or, if I don't want to use it, connect it to GND? Any help is appreciated.

I gave a few reasons why I didn't do temperature compensation on pH measurements in the comments; you can scroll up and read them. For the EC, I'm not sure which EC sensor you are using, so I can't really comment on how it should be connected; in general, if you don't use it, leaving it open should be fine. You can also read another of my posts, which uses a waterproof DS18B20 with an EC sensor.

Hi Henry, I found your article really interesting. I followed all the steps in your article, and I realized that when I short the BNC signal to make the 2.5-volt adjustment, the maximum range I can get by adjusting the trimmer is 2.4 to 4.99 volts. My board does not go below 2.4 volts. Do you think it is defective?

Hello, I have the same issue: while calibrating, the minimum I can get is 2.6 volts; I can't even reach 2.5. For me the range is 2.6 to 4.99.

You can do what I did. While calibrating, the minimum I was able to get was 2.59 V, so to get proper pH readings I just left the BNC potentiometer at the minimum possible and took readings of the powder solutions: 6.86 and 4.01.
And with this, using the equation y = ax + b, I could get the correct pH values. For example:

    pH = a*V + b
    4.01 = a*(3.04) + b
    6.86 = a*(2.54) + b

Solving gives y = -5.7x + 21.338, and that's it.

Hi, I have the same problem! When I shorted the BNC to make the 2.5-volt adjustment, the ADC value only went down to 831 and not 512. I can't adjust RV1 any further to get to 2.5 volts! Do you think the board is defective? Thanks for your help!

Hi, when trying to calibrate the sensor, I get an incorrect voltage from Po when trying to adjust it with the (BOATER 3296) potentiometer. I try to read it from the Po pin, but it just reads ADC = 872; Po = 4.262 V. The actual readings can be measured at the external part of the BNC probe or at the potentiometer; they show the correct voltage, which I have tuned to 2.5 V for the pH 7 baseline. Is this a fault with the BNC interface? I am running this code to measure the voltage at Po, the potentiometer and the BNC connector:

    #include <Arduino.h>

    const int adcPin = A0;

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      int adcValue = analogRead(adcPin);
      float phVoltage = (float)adcValue * 5.0 / 1023;
      Serial.print("ADC = ");
      Serial.print(adcValue);
      Serial.print("; Po = ");
      Serial.println(phVoltage);
      delay(1000);
    }

As far as I can see, there are no values given for the capacitors in the schematic. Does anyone know what the values are?
https://www.e-tinkers.com/2019/11/measure-ph-with-a-low-cost-arduino-ph-sensor-board/
On 8/5/12 5:30 PM, Gilles Sadowski wrote: >>>> [...] >>>>. >> Why? RandomData is pretty descriptive and exactly what these >> methods do. They generate random data. > Fine... > But can we plan to merge "RandomData" and "RandomDataImpl"? Definitely want to do this. Unfortunately, it is incompatible change. I guess we could create a new class named something else, deprecate both of the above and have RandomDataImpl delegate to the new class. How about RandomDataGenerator in the random package as the new class? > >>> And we should also find a way to remove the code duplication (in the >>> distribution's "sample()" method and in the corresponding "next..." method). >> +1 - the implementations can be moved. When I last looked >> carefully at this, the distribution methods were delegating to impls >> in RandomDataImpl. What we have agreed is to move the impls into >> the distribution classes for the basic sampling methods. That >> should not be too hard to do. I will do that if no one beats me to it. > I did it in r1363604. > What still needs to be done is redirect the "next..." to the "sample()" > method of the appropriate distribution. > But I had already raised the issue of efficiency: each call to e.g. > nextInt(p, q) > will entail the instantiation of a UniformRealDistribution object. > > What could be done is > 1. create a static method in the distribution class > 2. have the "sample()" method call that one > > --- > public class UniformRealDistribution extends ... { > // ... > > public static int nextInt(RandomGenerator rng, > int a, > int b) { > final double u = rng.nextDouble(); > return u * b + (1 - u) * a; > } > > public int sample() { > // Here "random", "lower" and "upper" are instance variables. 
> return nextInt(random, lower, upper); > } > } > --- > > And "nextInt" from "RandomDataImpl" would also be redirected to the static > method in the distribution class: > > --- > import org.apache.commons.math3.distribution.UniformRealDistribution; > > public class RandomDataImpl ... { > // ... > > public int nextInt(int lower, int upper) { > return UniformRealDistribution.nextInt(getRan(), lower, upper); > } > } > --- > >
http://mail-archives.apache.org/mod_mbox/commons-dev/201208.mbox/%3C501F1ECE.1060903@gmail.com%3E
Posts from Mark Ng

Startups - a competition? 2009-04-19T11:46:28Z Mark Ng

<img src="" alt="Boot Strap" /> <p style="font-size: 0.5em;">Image cc-licensed from <a href="">flickr</a></p> <p>So, on Thursday, I attended <a href="" title="startup meetup">Bournemouth Startup Meetup</a>, organised by <a href="" title="Luke Williams (socialtech) on Twitter">Luke Williams</a>. I've been a regular attendee of these events since they started, and they've produced some interesting chatter and I've met some interesting people there. This week, it was a small turnout, but that helped us focus on an agenda. At barcamp bournemouth, <a href="" title="Jonathan Markwell (JonMarkwell) on Twitter">Jon Markwell</a> spoke about his impending idea for a Brighton based startup competition, so our discussion was focussed on how we could do something similar for Bournemouth and the rest of Dorset.</p> <p>I've thought about this a bit. I think there is a lot of value in a competition like this, and I've been working towards start-up ideas in my spare time, too (<a href="" title="Twitfave">twitfave</a> being my most successful effort so far). However, there's really a need for a <a href="" title="Y Combinator">Y Combinator</a> like entity in the UK, to enable people to take ideas to fruition full time.</p> <p>I think the teams should be built of:</p> <ul> <li>three people who work on the project full time, living and working in the same house</li> <li>an involved and interested investor</li> <li>mentors, available for advice for the team</li> </ul> <p><a href="" title="Tom Harvey (tomharvey888) on Twitter">Tom Harvey</a> had the rather brilliant idea that if we hurried to make this happen, you could rent sections of a university's halls over the summer at a cheap price.
This has the advantage of potentially putting all of the startup teams in one place.</p> <p>I have a couple of ideas I'd love to work on if we could make something like this happen - who else would be interested in it?</p>

Travelodge FAIL 2008-07-28T16:44:54Z Mark Ng

<p>So, after a fantastic time at the <a href="" title="The Eden Project - Home">Eden Sessions</a>, I turned up at my pre-paid, prebooked <a href="" title="Travelodge" rel="nofollow">Travelodge</a> room in Plymouth. Or so I thought. Upon arriving in the small hours, I was told that there was no room for me. Obviously, quite angry, I demanded that the staff fix the situation somehow.</p> <p>After they called around for a while, they called their manager, Hannah Dennis. I was told that because the only other hotel they could find (the <a href="">New Continental</a> - which was fine) refused to take a purchase order from the Travelodge for the room, I was expected to <strong>pay for the second room myself!</strong> At this point, I was offered a refund of my room, but told that they couldn't pay me the difference for the more expensive room that they had found. I refused a refund at this point, because I expected to talk to the manager the next morning and gain a full refund, the difference and some form of compensation.</p> <p>The next morning, after being treated much better by the staff of the other hotel, I came back to sort out what was happening with my compensation. I was told by the manager that she was very sorry, but that it was all the fault of the head office team who overbook the rooms. I told her that someone could and should have phoned me, as my mobile number was on the booking. She also apologised for that. I told her what I expected in terms of compensation.
She told me that it was up to head office to deal with that, as it was their fault for overbooking and not booking me into another hotel room earlier. I asked for at least the cost of their room back on my card, and was told it would be "easier" to deal with the refund and the compensation all at once. I was assured that the team dealing with my complaint would fix this on Monday morning, on their return to the office.</p> <p>On Monday, I called Hannah again, who had stopped being her apologetic and helpful self, and was instead belligerent and rude. She told me that the only way I could get in contact with the people who were dealing with my complaint was by email or post, and that it could take 7 - 10 days to process my refund. I told her that that wasn’t good enough, and that I expected a refund that day. She told me that I was offered a refund when I was outbooked, and that she wouldn’t be able to process a refund then - this despite me asking <strong>her directly</strong> for the refund on the Saturday morning, when I was told it would be "easier" to get a refund from the head office department (who I was told I was not allowed to talk to).</p> <p>Travelodge - there are a couple of lessons you need to learn. Many other corporates could do with learning these lessons, too.</p> <ul> <li>"Head Office" is not an excuse. When a complaint needs to be escalated, the people dealing with that complaint should be <strong>available to the customer</strong></li> <li>Treating your customers like they are too insignificant when they have a major complaint is <strong>bad for business</strong>. In the age in which we are <strong>all hyperconnected</strong>, word travels fast and one pissed off consumer can do major harm to your business. 
I guarantee you that this blog post has done ten times more financial damage to your company than dealing with my complaint quickly and efficiently, at the point of the problem, would have.</li> <li>When you cock something up, you fix it then and there. Don't sit in your corporate ivory towers and promise a "7-10 day resolution time". It's not acceptable - you wouldn't stand for it and neither will I.</li> </ul> <p>So, I ponder my next course of action should the email I sent to customer services (which is the only way I am apparently allowed to contact Travelodge) not achieve a correct response. Should I talk to the OFT? Should I talk to my bank about a chargeback? Should I call the local press? Your answers are welcome and indeed solicited.</p>

dotdorset launch and new tool playtime (django and git) 2008-07-23T00:57:30Z Mark Ng

<p>So, amongst several meetings with <a href="">fellow dorset web types</a>, we discussed starting a portal to act as a gathering point for people like us. This led to a <a href="">google group</a>, an <a href="">upcoming group</a>, a <a href="">presence on twitter</a> and finally, <a href=""><strong>the portal itself, dotdorset.org</strong></a>.</p> <p>Because this was a non-paying, non-client project, I took the opportunity to use a selection of tools I hadn't used for anything significant and wanted some more experience with. It's nice to have these projects from time to time, where you can <strong>take risks</strong> you wouldn't otherwise be able to take with client work.
The new tools I used for this project were:</p> <ul> <li><a href="" title="Python Programming Language -- Official Website">Python</a></li> <li><a href="" title="Django | The Web framework for perfectionists with deadlines">Django</a></li> <li><a href="" title="Git - Fast Version Control System">Git</a></li> </ul> <p>I'm lying when I say Python is new to me. <strong>I've had a fear of it for a long time</strong>. Upon entering a job some years ago, I was handed two failing projects written using <a href="" title="Zope.org">Zope</a> and <a href="" title="Plone CMS: Open Source Content Management">Plone</a>. This was my first significant experience with Python, and it was so bad that whenever I saw Python code afterwards, <strong>it made me shudder</strong>.</p> <h3>Django</h3> <p>However, I've been meaning to get over that and give Django a try for some time now. I've noticed quite a few clever people I know trying it out and being happy, and I've also been lucky enough to see <a href="" title="Simon Willison's Weblog" rel="contact">Simon Willison</a> present on it (very clever chap who is one of the creators of Django). Also, it's from a publishing background, and some of my clients are in that arena, and earlier parts of my career were spent working for publishing companies.</p> <p>I had a real easy time getting Django set up on my macbook. However, other people checking out the code to contribute bits and pieces had a lot less fun than I did. I'm still not sure what caused their problems.</p> <p>Developing in Django was, for the most part, <strong>a real pleasure</strong>. It became obvious that significant portions of <a href="" title="symfony | Web PHP Framework">Symfony</a> were either inspired by Django or inspired Django (I suspect the former). This made things a lot easier for me, as the learning curve was made somewhat smaller.
However, the model layer was a lot more lightweight and easy to work with than <a href="">Propel</a> (though I like that Propel is so easy to reverse engineer because of the way it generates base classes).</p> <p>I also really liked the way Django templates work. Their concept of extending base templates worked very nicely. I did find the way that template directories are organised by default to be a little weird, but as the settings files accept python, you're able to change that default behaviour quite easily yourself.</p> <p>The Django admin interface is a <strong>work of genius</strong>. I'm going to say no more on this topic, as this is all you need to know.</p> <p>I was <strong>much less impressed</strong> with deploying Django into a production environment using mod_python, though. I've become so used to mod_php just working that I was surprised how much messing around with interpreters and locations for egg files and environment variables there was. And then, when we finally deployed live, the whole server ran out of memory and died (still not sure what caused this). I've been recommended <a href="" title="modwsgi - Google Code">mod_wsgi</a> and <a href="" title="Overview — Phusion Passenger™ (a.k.a. mod_rails / mod_rack)">phusion passenger</a> as possible alternatives for deploying to. I'd really love to hear some more opinions and tips about how to deploy Django well (both from a which-servers point of view and also which deployment tool - maybe <a href="" title="Capistrano: Home">capistrano</a>?)</p> <h3>Python</h3> <p>Django is a much nicer introduction to using Python than hacking around Zope was. The concise syntax is nicer to read than PHP; however, I don't think I've really yet understood what it means to be "<a href="" title="What is Pythonic?">Pythonic</a>". I wonder how much a Python veteran would scream at my code.</p> <h3>Git</h3> <p>Git is a distributed version control system.
For the workflow on this project, there were only two remarkable things - it was much faster than svn, and it was easier to hold repositories in multiple places. I haven't quite worked out which of the methods I'll use to replace svn:externals yet, though.</p> <p>Overall, this project as a chance to try out new tools has been a great success. What new tools have you introduced recently, and was it a positive experience?</p>

Opentech 2008 - <- revolution this way 2008-07-06T15:01:50Z Mark Ng

<img src="" alt="Picture - Marxism is not here (toward SOAS), revolution this way, however (toward Opentech)" /> <p>So, I went to <a href="" title="Open Tech 2008 - 5th July in London.">opentech 2008</a> yesterday. There was quite a lot of exciting stuff going on there, and I got to see quite a few people I know!</p> <p>The most interesting speaker I hadn't seen before was the somewhat legendary <a href="" title="Danny O'Brien's Oblomovka">Danny O'Brien</a>, talking about the formation of the <a href="" title="The Open Rights Group">open rights group</a> and his "Living on the Edge" presentation.</p> <p>I also got to meet Ben Goldacre, whose work at his <a href="" title="Ben Goldacre" rel="contact">Bad Science blog</a> I admire a lot.</p> <p>Lunch with <a href="" rel="contact">Jon Lim</a> and <a href="" rel="contact">Tom Morris</a> and some others was great.</p> <p>The most interesting thing at the conference for me was getting more information about <a href="" title="Show Us a Better Way">show us a better way</a>, which I became aware of in the last week or two. This is an initiative by the <a href="" title="Power of Information Task Force">Power of Information Taskforce</a>. They've gotten lots of sources of data from different parts of government, and made them available for people like myself to make mashups with.
The most interesting data set to me was the <a href="" title="London Gazette">London Gazette</a> - a sample set of which was available as zipped XML, but there is an ongoing project to make the current site include <a href="" title="RDFa Primer">RDFa</a>, which <a href="" rel="contact">Jeni Tennison</a> and John Sheridan are working on.</p> <p>Also, John mentioned the <a href="">Public Sector Information Unlocking Service</a>, which is a service to help people get information in the right formats or with the right licensing where they're entitled to it. It's really good to see government slowly catching up with providing data for re-use.</p> <p>Interestingly, they also have a prize fund available to help build ideas (of which there are <a href="">already a large number!</a>).</p> <p>In the evening, I had the pleasure of eating at Strada with a bunch of people including <a href="" rel="contact">Rain</a>, <a href="" rel="contact">Ian Forrester</a>, <a href="">Emma Persky</a>, <a href="">Tom Morris</a>, <a href="" rel="contact">Jeni Tennison</a>, <a href="" rel="contact">coldclimate</a>, David McBride, Glyn Wintle, <a href="">Sheila Thomson</a> and some others, which was a perfect end to the day (and the less said about having to sleep for three hours in my car at Fleet services on the way home, the better).</p> <p>Photo courtesy of "rooreynolds" on flickr ( <a href=""></a> ), creative commons.</p>

timelapse screencasting + isight 2008-07-04T03:34:20Z Mark Ng

<p><strong>DISCLAIMER: All of this is a horrid hack.
Don't blame me if it pees in your cornflakes.</strong></p> <object width="400" height="300"> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="movie" value="" /> <embed src="" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><br /><a href="">Test timelapse</a> from <a href="">Mark Ng</a> on <a href="">Vimeo</a>. <p>So, a lot of you will have seen the <a href="">Carsonified timelapse videos</a> showing their team developing their new web application <a href="">Matt</a>.</p> <p>When I saw this, the first thing I thought about was using this as a means of tracking my productivity. Looking back on a day's work after you've done it can give you a lot of hints as to where you're wasting your time. So, I set about working out how to make these myself.</p> <p>I butchered some bits and pieces of AppleScript floating around the internet to take the pictures during the day. Note: I work with my laptop screen and a desktop monitor, and chose to record both of those (screen1 and screen2). The AppleScript is below:</p>
<pre> <code>
set save_location to ¬
    (choose folder with prompt "Choose where to save screenshots")

on quit
    display dialog "Stop recording?" buttons {"No", "Quit"}
    if the button returned of the result is "Quit" then
        continue quit
    end if
end quit

repeat with shotcount from 1 to 1440
    do shell script "screencapture -C -tjpg -x " & ¬
        quoted form of POSIX path of save_location ¬
        & "screen1-`date '+%y%m%d.%H%M'`.jpg " & quoted form of POSIX path of save_location & "screen2-`date '+%y%m%d.%H%M'`.jpg"
    do shell script "/Applications/isightcapture " & quoted form of POSIX path of save_location & "face.`date '+%y%m%d.%H%M'`.jpg"
    delay (60 * 1) -- delay 1 minute
end repeat
</code> </pre>
<p>This expects a freeware program <a href="">isightcapture</a> to be in your /Applications/ directory, and you'll need to compile this script into an application and run it (it's also a bit flaky and doesn't quit properly - I don't know AppleScript at all and built this script in about 15 minutes.)</p> <p>So, after running this for a whole day, you'll have a folder full of timestamped images. For the next part, you'll need ImageMagick and mplayer/mencoder installed. I got both of these from <a href="">macports</a>, but you may choose to get them in some other manner.</p> <p>To process the folder, first you need to merge the images together so that the screens and face appear in the same image. I did this with a shell script like this:</p>
<pre>
#!/bin/bash
mkdir processed
for time in `ls face*.jpg | awk -F"." '{ print $2 "." $3 }'`; do
    montage -geometry '600x600' -shadow -background none -tile 2x face.$time.jpg screen1-$time.jpg screen2-$time.jpg processed/montage.$time.jpg
done
</pre>
<p>You may wish to mess around with the geometry and tiling, background, etc. You can take a look at <a href="">this page about the montage tool in ImageMagick</a>.</p> <p>Finally, you need to use mencoder to make a movie! Optionally, you can add an MP3 or ogg file for background music. Inside the processed image folder, run the following (messing around with the fps can be helpful):</p>
<pre>
mencoder mf://*.jpg -mf w=1280:h=800:fps=4.3:type=jpg -ovc x264 -x264encopts pass=1:bitrate=256 -audiofile /path/to/music.mp3 -oac mp3lame -o timelapse.avi
</pre>
<p>You'll be left with a video file. I built one for the afternoon of that day. Interestingly, it was a day that I had to go out and pick my car up from servicing and also get my hair cut, which shows me some good productivity drains right there!</p>
Dorset Web BBQ 2008-05-18T10:26:49Z Mark 75 <p>Since moving to Bournemouth, I've not noticed much in the way of web community events going on here or in the rest of Dorset, despite the fact that there do actually appear to be some talented people down here. In an effort to remedy this, I'm having a barbeque to try and get some people together. Hopefully, I'll get a chance to meet some new people and we can organise some more events! If you're interested in coming along, <a href="">find details and sign up here.</a></p>
Feedshaver - categorize your RSS feeds using opencalais 2008-05-07T21:11:08Z Mark 74 <p>The application I built for the telegraph developer weekend (which won second prize) is <a href="">live on the internet</a>.</p> <p><a href="">Feedshaver</a> is an application written in <a href="">symfony</a> that utilizes Reuters <a href="">OpenCalais</a>.</p> <p>Oh, I might even get a design for it, too.</p>
Over the Air conference summary 2008-04-05T23:19:08Z Mark 73
<p><strong>updated :</strong> added links to <a href="">octobastard video</a> and <a href="">PrimeSky slides</a></p> <p>I'm back in Bournemouth after 2 days in London at <a href="" title="Over The Air">overtheair</a> at <a href="" title="Imperial College London">Imperial College</a>. overtheair was an interesting change in conference formats, as it was a hybrid of a more traditional speaker led conference, followed by a hack day, combined with the sleep-over common to <a href="" title="BarCamp wiki">Barcamps</a>.</p> <p>overtheair was, overall an excellent conference. I saw some excellent talks - in particular a UX panel which I found interesting. I also got the chance to get the new <a href="">Nokia Web Runtime</a> running on my N95.</p> <h2>Mobile and the Web</h2> <p <a href="" title="Royal Observatory : What's on : NMM">Royal Observatory</a> with <a href="" title="Future Platforms">Future Platforms</a>. I was excited that <a href="" title="Tom Hume" rel="friend met colleague">Tom Hume</a> and <a href="" title="Bryan Rieger — on design, devices and distractions…" rel="friend met colleague">Bryan Rieger</a> got a chance to <a href="">present on our experiences building this application</a>.</p> <p>Their presentation went really well, and it was nice that afterwards lots of people came and spoke to us with their questions about the project. I hope that some people left inspired to create web applications that work equally well on mobile, and start designing with mobile in mind.</p> <p>I spent the night hacking around with the web runtime, starting to build a twitter client (isn't this like mobile "Hello World" now ?) and playing werewolf - where I was a seer for the first time!</p> <h2>Octobastard</h2> <p>In the morning, I saw that the Future Platforms crew had been working hard on their competition entry all night and needed help killing a few problems at the end. 
I helped them out a bit.</p> <p <a href="" title="Arduino - HomePage">Arduino</a>, he decided to do what any of us would do - make a robot arm !</p> <p>They somehow went from a robot arm controlled by a wii nunchuk to a robot arm with a camera on it <a href="">controlled by a Sony Ericsson phone with accelerometers</a>.</p> <p>Just for fun, I added an <a href="">automated flickr stream</a>, nokia web runtime and iPhone clients to the mix. At some point it was noticed that we had 8 distinct parts of this project, and Tom christened the robot "octobastard". Afterwards, we decided to add more distinct parts, but the name stuck.</p> <p).</p> <p>I met a lot of interesting unfamiliar and familiar faces at overtheair (and somehow ended up going home with 8 beanbags that various attendees couldn't be bothered to carry home in my car !). I'd encourage anyone I met there to <a href="">stay in touch.</a> Also, if I can get links to some of the video and slides to do with these, I will add them later.</p> <p>As a side note - a couple of friends of mine made an utterly silly and <a href="">hilarious torchwood parody</a> while at the conference.</p><img src="" height="1" width="1"/> XHTML-MP and mobile hCard - Barcamp Brighton Presentation 2008-03-15T01:54:11Z Mark 72 These following code notes accompany my Barcamp Brighton presentation, and have example code to use [...] These following code notes accompany my Barcamp Brighton presentation, and have example code to use my WURFL API to create part of an hCard that enables mobile users to call and SMS directly from webpages. 
<h3>Controller section</h3> <pre> $ua = $_SERVER['HTTP_USER_AGENT']; if(array_key_exists('HTTP_X_DEVICE_USER_AGENT',$_SERVER) AND $_SERVER['HTTP_X_DEVICE_USER_AGENT']) $ua = $_SERVER['HTTP_X_DEVICE_USER_AGENT']; $device = unserialize(file_get_contents("")); </pre> <h3>View section</h3> <pre> <div class="tel"> <span class="type">Mobile</span> <?php if ($device['product_info']['is_wireless_device']): ?> <a href="<?php echo $device['xhtml_ui']['xhtml_make_phone_call_string'] ?>+447828794899" class="value">07828 794899</a> <a href="<?php if (!$device['xhtml_ui']['xhtml_send_sms_string'] OR $device['xhtml_ui']['xhtml_send_sms_string'] == 'none'): ?>sms:<?php else: echo $device['xhtml_ui']['xhtml_send_sms_string'] ?><?php endif ?>+447828794899">(sms)</a> <?php else: ?> <span class="value">07828 794899</span> <?php endif ?> </div> </pre> <p>A slideshare or similar link will follow when the actual presentation has been done !</p><img src="" height="1" width="1"/> Populating sfSimpleBlogPlugin from RSS 2008-03-05T21:03:21Z Mark 71 Finally, I've moved my personal site to Symfony. I've been using the framework extensively professi[...] <p>Finally, I've moved my personal site to <a href="" title="symfony Web PHP Framework">Symfony</a>. I've been using the framework extensively professionally for a bit more than a year. I decided to use <a href="" title="sfSimpleBlogPlugin - symfony - Trac">sfSimpleBlogPlugin</a> to handle the blogging requirements on the site. I could have spent a while fiddling with databases to get the old data from my site off, but instead I decided to follow a different route by pulling all of my data from my old RSS feed.</p> <p>Symfony provides the excellent <a href="" title="sfFeed2Plugin - symfony - Trac">sfFeed2Plugin</a> for producing and parsing various feed formats. I used this outside of the controllers in a batch task to do import my old data in. 
Here is the code :</p> <pre> <(); } </pre> <p :</p> <pre> php batch/rss_import.php </pre><img src="" height="1" width="1"/>
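The import described in this post boils down to a simple recipe: fetch the old RSS feed, walk its items, and save each one as a post. Here is a rough sketch of that idea in Python rather than the sfFeed2Plugin batch code; the `Post` class below is a made-up stand-in for a real model, not anything from Symfony.

```python
# Minimal sketch of importing old blog posts from an RSS 2.0 feed.
# Illustration of the idea only -- not the Symfony/sfFeed2Plugin batch
# script; the Post class is a made-up stand-in for a real ORM model.
import xml.etree.ElementTree as ET

class Post:
    def __init__(self, title, link, body):
        self.title, self.link, self.body = title, link, body

def import_feed(rss_xml):
    """Return one Post per <item> in an RSS 2.0 document."""
    channel = ET.fromstring(rss_xml).find("channel")
    return [
        Post(
            title=item.findtext("title", default=""),
            link=item.findtext("link", default=""),
            body=item.findtext("description", default=""),
        )
        for item in channel.findall("item")
    ]

SAMPLE = """<rss version="2.0"><channel><title>Old blog</title>
<item><title>Hello</title><link>http://example.com/1</link>
<description>First post</description></item>
</channel></rss>"""

for post in import_feed(SAMPLE):
    print(post.title, "->", post.link)
```

In a real batch task you would replace the `print` loop with whatever persists a post in your blog's model layer.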
http://feeds.feedburner.com/MarkNg
by James Polanco Now that Adobe has released the public betas of Flash Catalyst and Flash Builder, you are probably hearing and seeing more about the new graphics file format, FXG. So what is FXG and why is it important to Flash Catalyst, Flash Builder, Flex 4, Flash Professional and Creative Suite users? In this article, we will explore the FXG file format, discuss MXML Graphics and how it led to the development of FXG, and then examine how FXG and MXML Graphics are leveraged within the new Flash Platform design and development paradigm. MXML Graphics and FXG are file formats that contain an XML-based declarative layout language that describes complex vector and bitmap based artwork. In simpler terms, FXG is a way to describe artwork such as shapes, lines, gradients, layers, blurs, drop shadows, etc. in an easy-to-read text-based format. One of the major challenges Flash Platform designers and developers face is the process of skinning and stylizing the look of a Flash-based application to meet the designer's needs. A core issue that needed to be solved first was to move the drawing ability of Flash out of the ActionScript-only world and into an XML-based language. The best way to discuss this is with a simple example. Let's look at the old way of drawing in Flash (and Flex). In this example I want to create a basic 200 × 50 linear gradient that goes from black to white (see Figure 1). The ActionScript code to draw this gradient would look like this:

var g:Graphics = this.graphics;
var fill:LinearGradient = new LinearGradient();
var g1:GradientEntry = new GradientEntry(0xFFFFFF, 0, 1);
var g2:GradientEntry = new GradientEntry(0x000000, 1, 1);
fill.entries = [g1,g2];
fill.rotation = 90;
fill.begin(g, new Rectangle(0,0,200,50));
g.drawRect(0,0,200,50);
fill.end(g);

Figure 1. A basic 200 × 50-pixel linear gradient. Not very pretty, is it?
Not only is it hard to understand for a non-ActionScript developer, it's nearly impossible for a tool, such as Flash Catalyst, to understand and interpret this as a rectangle on screen. Thus, the first step towards FXG was the introduction of the MXML Graphics syntax in Flex 4. MXML Graphics solves this problem by exposing the Flash Player drawing API as a series of MXML tags. We would draw the same box in MXML Graphics as:

<Group>
  <Rect x="0" y="0" width="200" height="50">
    <fill>
      <LinearGradient x="0" y="0" rotation="90">
        <entries>
          <GradientEntry color="#FFFFFF" ratio="0" alpha="1" />
          <GradientEntry color="#000000" ratio="1" alpha="1" />
        </entries>
      </LinearGradient>
    </fill>
  </Rect>
</Group>

This syntax is much easier to read, provides a clear parent/child hierarchy that the tools can easily parse, and it doesn't have to be buried deep in the core of the Flex component code or your ActionScript. MXML Graphics not only exposes the basic drawing API, such as rectangles, ellipses, fills and lines; you can also access filter effects to create blurs and glows. You have the ability to have bitmap assets as children of the group and define their location, scale, z-depth, etc. In Flex, you can bind to ids, apply transitions and effects to the MXML Graphics content and treat it as you would any first-class MXML tag. It's a very powerful and exciting addition to the MXML language. So how does MXML Graphics relate to FXG? Well, all FXG does is wrap the MXML Graphics syntax in a header tag and enable the graphic description text to live in a stand-alone external file.
For example, if I was to create the same graphic in Fireworks CS4 and export it, the generated FXG file looks like this: <?xml version="1.0" encoding="UTF-8"?> <Graphic version="1.0" xmlns="" xmlns: <Library> </Library> <Group id="Page_1" fw: <Group id="State_1" fw: <Group id="Layer_1" fw: <Rect x="0" y="0" width="200" height="50" blendMode="normal"> <fill> <LinearGradient x = "166" y = "61" scaleX = "47" rotation = "90"> <GradientEntry color="#ffffff" ratio="0" alpha="1"/> <GradientEntry color="#000000" ratio="1" alpha="1"/> </LinearGradient> </fill> </Rect> </Group> </Group> </Group> </Graphic> At the middle of the file, it's exactly the same as our MXML Graphics example, with just some interesting wrapper group tags and a header tag. Notice that the root <Graphic> tag has a namespace called ‘fw' that links to Fireworks. This is an important aspect that separates FXG from MXML Graphics. By enabling the ability to tag the source-generating tool we can start creating some interesting abilities, such as content transport. Now that we have briefly explored what MXML Graphics and FXG is, you might be wondering "How do I use FXG outside of a Flex application?" One of the coolest workflow that is featured in the Flash Catalyst beta is the ability to round-trip content from Flash Catalyst to Adobe Illustrator CS4 and then back to Flash Catalyst. To enable this ability, FXG is used as the transport protocol between the applications. Catalyst takes the selected content, which is represented in either MXML Graphics or FXG, converts it to FXG (if needed), and then sends the FXG content to Illustrator. Flash Catalyst marks up the FXG to define layers, states, etc., so that Illustrator can show this in its layers panel. Now that you have the content in Illustrator you can add layers, change content, move things around and then send it back to Catalyst. 
This is done automatically by taking your changes, converting them back to FXG and sending the FXG data to Flash Catalyst, which then takes the FXG and applies it to your project. It's very, very powerful and really nifty. All this FXG work is done behind the scenes, so you more or less don't see it in action. In Illustrator you will see the FXG export dialog when you are sending your content back to Catalyst. This dialog allows you to tweak the default settings of the exported FXG if you desire. In 99.9% of cases the default is just fine, but Adobe wants to make sure that you have as much control over the generated FXG data as possible. FXG isn't just for tooling; you can use it as an export file type directly inside Adobe Flash. Ryan Stewart made an excellent video example of using FXG directly inside a Flex 4 application. In the video he treats the FXG file as you would any other asset file. We recommend taking a few minutes to watch his demo to see some more possibilities with FXG. Right now, Adobe Photoshop CS4, Adobe Illustrator CS4, Adobe Fireworks CS4, Flash Catalyst, Flash Builder, and Flex 4 are the only tools that support FXG. This is a brand-new format, one for which we expect to see much broader and deeper support in the near future. In fact, the FXG specification was just updated from 1.0 to 1.1. As we get closer to the official launch of Flex 4, Flash Builder and Flash Catalyst you will hear more about FXG and its features. It's a great addition to the Flash Platform and the Creative Suite tools, one that has the ability to help define and revolutionize how we work as a team when designing and developing our applications. For more information about what FXG and Flex 4 can do for you, I recommend that you watch the Flash Camp demo videos and the Flex 4 Component re-architecture videos on Adobe Labs.
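One reason FXG matters is exactly what the article stresses: an XML description of artwork is trivial for tools to parse. As a rough illustration (plain Python, not part of any Adobe tooling; real FXG files carry namespaces and tool-specific attributes that are ignored here), this sketch walks an FXG-like fragment and reports each rectangle's geometry and gradient stops:

```python
# Walk a simplified FXG/MXML Graphics fragment and report each <Rect>'s
# geometry and gradient stops. Hand-rolled illustration only -- real FXG
# files carry XML namespaces and tool-specific attributes we skip here.
import xml.etree.ElementTree as ET

FXG = """
<Graphic version="1.0">
  <Group>
    <Rect x="0" y="0" width="200" height="50">
      <fill>
        <LinearGradient rotation="90">
          <GradientEntry color="#FFFFFF" ratio="0" alpha="1"/>
          <GradientEntry color="#000000" ratio="1" alpha="1"/>
        </LinearGradient>
      </fill>
    </Rect>
  </Group>
</Graphic>
"""

def describe(doc):
    """Return one dict per rectangle: size plus its gradient stop colors."""
    shapes = []
    for rect in ET.fromstring(doc).iter("Rect"):
        stops = [entry.get("color") for entry in rect.iter("GradientEntry")]
        shapes.append({
            "width": int(rect.get("width")),
            "height": int(rect.get("height")),
            "stops": stops,
        })
    return shapes

print(describe(FXG))  # one 200x50 rect fading from #FFFFFF to #000000
```

This is essentially what a tool like Catalyst has to do at import time, which is much harder to pull off against imperative ActionScript drawing code.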
http://www.adobe.com/inspire-archive/august2009/articles/article1/index.html?trackingid=EVHEZ
In this article we'll see how to use the DropDownList and ListBox Web controls to display data in various formats. Introduction This article is based on queries in the newsgroup. Query 1: The data in the database, along with the desired output, is as given below. Note: the DropDownList has a bug which prevents us from assigning the style property to each item in the DropDownList. So, as stated by the MS article, the alternative method to achieve this is by using the tag given below: C# To solve this the concept remains the same; the only thing we need to do is use the ListBox and navigate through each record in the dataset to give each item the appropriate color. So let's consider the Products table in the Northwind database and expect the output as below: Tag that we use: Code given below: C# As we have done the coding for assigning the style property to each item, we might as well complete the code to develop a Color Picker in ASP.NET as below: So the control we use is And code for this is as below: We'll use the namespace System.Reflection to get the FieldInfo (i.e. Colors). The other way this can be done is by using the namespace System.ComponentModel and the TypeDescriptor class.
https://www.c-sharpcorner.com/article/working-with-dropdownlist-and-listbox-controls-in-Asp-Net/
24 November 2021 Alex Miller

Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS)

Welcome to a special mid-week Deref as we will be out in the US this week! But on that note, big thanks to the Clojure community for always being interesting, inventive, and caring. I'm thankful to be a part of it.

Our big news this week is the release of Clojure 1.11.0-alpha3, which wraps up much of the work we've done in the last couple of months. Probably the most interesting parts are the new things:

- CLJ-2667 Add functions to parse a single long/double/uuid/boolean from a string
- CLJ-2668 Add NaN? and infinite? predicates
- CLJ-2664 Add clojure.java.math namespace, wrappers for java.lang.Math

If you have questions about these, I would request that you read the ticket first - we're trying to get thinking and background into the ticket descriptions and it's important context. We've already had a lot of feedback about clojure.java.math re cljs portability and higher-order use, so probably more to come on that. If you want to discuss on Clojurians Slack, the #clojure-dev room is the best place.

Docstring updates:

- CLJ-2666 Make Clojure Java API javadoc text match the example
- CLJ-1360 Update clojure.string/split docstring regarding trailing empty parts
- CLJ-2249 Clarify clojure.core/get docstring regarding sets, strings, arrays, ILookup
- CLJ-2488 Add definition to reify docstring

Perf:

Bug fix:

And last but not least, we added support for optional trailing maps to kwarg functions in Clojure 1.11.0-alpha1 but had not yet worked through what this meant for spec. We've now released an update to spec.alpha (0.3.214) that is included as a dependency in this release. For the background on this, see CLJ-2606.
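For readers who don't follow the tickets: the new parsing functions return the parsed value on success and nil when the string isn't a valid literal, rather than throwing. A rough analogue of that contract, sketched here in Python (my illustration, not the Clojure implementation, and the edge cases do not match exactly):

```python
# Rough analogue of Clojure 1.11's parse-long / parse-double contract:
# return the parsed value, or None (Clojure's nil) when the string is
# not a valid literal. Sketch only -- not the actual implementation,
# and Python's int()/float() accept slightly different inputs.
def parse_long(s):
    try:
        return int(s)
    except ValueError:
        return None

def parse_double(s):
    try:
        return float(s)
    except ValueError:
        return None

print(parse_long("42"), parse_long("42abc"))   # 42 None
print(parse_double("1.5"), parse_double("x"))  # 1.5 None
```

The point of the design is that callers can branch on nil instead of wrapping every parse in a try/catch.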
Not to be outshone, we also released an updated version of core.async 1.5.640, which has several important bug fixes, particularly if you are using any of the alt variants, or something that uses alt indirectly like mix or merge.

Clojure as a Competitive Advantage - Simon El Nahas
Thunks — a place to think through ideas still forming - Fogus
Circular Programming in Clojure - Gary Verhaegen
Deploying to clojars - Inge Solvoll
Versions in the Time of Git Dependencies - Hugo Duncan
Scraping web product data with Clojure - Duane Bester
An Update on CIDER 1.2 - Bozhidar Batsov
Making nREPL and CIDER More Dynamic (part 2) - Arne Brasseur

New releases and tools this week:

- stub 0.1.1 - Library to generate stubs for other Clojure libraries
- sicmutils 0.20.0 - A port of the Scmutils computer algebra/mechanics system to Clojure
- tweet-def - Tweet as a dependency
- sweet-array 0.1.0 - Array manipulation library for Clojure with "sweet" array type notation and more safety by static types
- spec.alpha 0.3.214 - Describe the structure of data and functions
- cljs-macroexpand - clojurescript macroexpand-all macro with meta support
- secret-keeper 1.0.75 - A Clojure(Script) library for keeping your secrets under control
- datahike 0.4.0 - A durable Datalog implementation adaptable for distribution
- Tutkain 0.11.0 (alpha) - A Sublime Text package for interactive Clojure development
- unminify - unminifies JS stacktrace errors
https://clojure.org/news/2021/11/24/deref
#include <UIOP_Acceptor.h>

Inheritance diagram for TAO_UIOP_Acceptor.

- Create Acceptor object using addr. [virtual]
- Destructor. Implements TAO_Acceptor.

[private]
- Create a UIOP profile representing this acceptor.
- Add the endpoints on this acceptor to a shared profile.
- Obtains uiop properties that must be used by this acceptor, i.e., initializes <uiop_properties_>.
- Implement the common part of the open*() methods.
- Parse protocol specific options.
- Set the rendezvous point and verify that it is valid (e.g. wasn't truncated because it was too long).

Member data:
- The concrete acceptor, as a pointer to its base class.
- Should we use GIOP lite?
- ORB Core.
- Flag that determines whether or not the rendezvous point should be unlinked on close. This is really only used when an error occurs.
- The GIOP version for this endpoint.
http://www.theaceorb.com/1.4a/doxygen/tao/strategies/classTAO__UIOP__Acceptor.html
17 April 2012 11:43 [Source: ICIS news] SINGAPORE (ICIS)--A small fire erupted at Formosa Petrochemical Corp's (FPCC) residue desulphuriser (RDS) unit in Mailiao, Taiwan. The fire broke out at about 04:10 hours Taiwan time (20:10 GMT) on Tuesday while FPCC was testing its 80,000 bbl/day No 2 RDS unit, which is due to restart later this month, according to a company source. The No 2 unit has been shut since 25 July 2010. "The fire occurred when FPCC attempted to restart the RDS this morning," a source close to the company said. "The Formosa Group's internal fire department put out the fire. There were no reports filed to the fire department as the RDS unit is not active, and the fire was considered to be a very small fire," the source said. The fire was put out after about 20 minutes, according to the source. A company source said that a leakage of lubricant during a test run of the No 2 unit produced the smoke but there was no explosion. Testing of the No 2 unit has been halted and it is unclear when the unit will restart, as the company will have to investigate the leak and again seek permission from the authorities to start up the unit. Meanwhile, another FPCC source said that the company's three crackers at the Mailiao refinery site were not affected by the incident and are running at close to full capacity. A spokesperson from Nan Ya Plastics, a unit of the Formosa Group ... With additional reporting by Chow Bee Lin, Peh Soo Hwee, Quintella Koh
http://www.icis.com/Articles/2012/04/17/9550946/fire-breaks-out-at-formosas-mailiao-site-during-rds-unit-restart.html
XmlDocument.PreserveWhitespace Property

Gets or sets a value indicating whether to preserve white space in element content.

Assembly: System.Xml (in System.Xml.dll)

This property determines how white space is handled during the load and save process. This property is a Microsoft extension to the Document Object Model (DOM). The following example shows how to strip white space from a file.

using System;
using System.IO;
using System.Xml;

public class Sample
{
    public static void Main()
    {
        // Load XML data which includes white space, but ignore
        // any white space in the file.
        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = false;
        doc.Load("book.xml");

        // Save the document as is (no white space).
        Console.WriteLine("Display the modified XML...");
        doc.PreserveWhitespace = true;
        doc.Save(Console.Out);
    }
}

The example uses the file book.xml.
https://msdn.microsoft.com/en-us/library/system.xml.xmldocument.preservewhitespace(v=vs.100).aspx
C# Translator [using GOOGLE API] Started by Professor Green, Jul 14 2011 09:05 AM. Tagged: combobox. 13 replies to this topic.

#13 Posted 19 December 2012 - 01:03 PM
Man, that kinda stinks. I was looking forward to making one of these.

#14 Posted 25 January 2013 - 04:31 AM
Hello, this thread is a full tutorial about making a Translator using Google's API to build a C# Windows Forms Application that uses your internet connection and translates your text via Google.

What Do You Need?
1- Google API, download it from the attachments - the API I used, I downloaded it from CodePlex since code.google is forbidden in my Country
2- .NET framework 3.5

WHY 3.5? If you are using .NET 4.0 you need to get a newer API than the one I used. ARE YOU USING Visual Studio 2010? Then the default is .NET 4.0. HOW TO MAKE IT 3.5? Follow me in pics: Go to Solution Explorer --> Right Click on the Project File --> Choose 3.5 client profile from the framework target

LET'S START:

**First Step**
Now open your Windows Forms project. Before starting to design the form, let's add the API. The Google API is a dll library; adding it can be done from the Solution Explorer --> Right Click on References --> Add Reference. By clicking OK, you should see that the GOOGLE namespace is now available. Go to your FORM code and add: using Google.API.Translate;

**Second Step**
Making the Form design: see my design, and you can make any changes you want. If you have any questions about the ToolStrip menu or any GUI, ask me ...
but it's simple and easy.

**Third Step: CODING**

First: let's fill the combo boxes with the FROM and TO languages that Google Translate gives us via its API. Double-clicking on the combo box will take you to the "Click" event. My combo box name is "combo_from"; type this code:

for (int i = 0; i < Google.API.Translate.Language.TranslatableCollection.Count(); i++)
{
    combo_from.Items.Add(Google.API.Translate.Language.TranslatableCollection.ElementAt(i).ToString());
}

and the same thing for the second combo box ...

Second: the translate method that the Translate button will call:

public string TranslateYourText(string text, string langFrom, string langTo)
{
    string translated = "";
    Console.WriteLine(text);
    try
    {
        TranslateClient client = new TranslateClient("");
        Language lang1 = Language.English;
        Language lang2 = Language.Arabic;
        foreach (CultureInfo ci in CultureInfo.GetCultures(CultureTypes.NeutralCultures))
        {
            if (ci.EnglishName == langFrom) { lang1 = ci.Name; }
            if (ci.EnglishName == langTo) { lang2 = ci.Name; }
        }
        if (Autodetect == true)
        {
            string from;
            translated = client.TranslateAndDetect(text, lang2.ToString(), out from);
            foreach (string options in Language.GetEnums())
            {
                if (options == from)
                {
                    CultureInfo ci = new CultureInfo(from);
                    combo_from.Text = ci.EnglishName;
                }
            }
        }
        else
        {
            translated = client.Translate(text, lang1, lang2, TranslateFormat.Text);
            Console.WriteLine(translated);
        }
        return translated;
    }
    catch
    {
        MessageBox.Show(this, "Check Your Internet Connection", "Try Again Please", MessageBoxButtons.OK, MessageBoxIcon.Error);
        return translated;
    }
}

Now, you should know one thing: using (Display Parameter Info) and (Display Quick Info) is very important, since you don't know the behavior of the Google API's TranslateYourText method. Anyhow, this method is the one and only method, and your translation is ready:
- 3 parameters: the text you want to translate, the language from, and the language to
- Creating an instance of TranslateClient: TranslateClient client = new TranslateClient("");
- Languages available written in English

Double-click on the Translate button and paste this:

string translated = TranslateYourText(txt_from.Text, combo_from.Text, combo_to.Text);
txt_to.Text = translated;

If you have any questions, ask ..! If you have any updates, the project is available for all Code Callers; everybody who makes an update, please write his name in the About. Thanks!
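Underneath the API calls, the tutorial's TranslateYourText is doing one core job: mapping the human-readable language names shown in the combo boxes to the short codes a translation client expects. A rough Python sketch of that step (the name/code table is hand-picked for illustration, and `translate()` stands in for the real client call, which is not reproduced here):

```python
# The heart of the tutorial's TranslateYourText: turn human-readable
# language names into the short codes a translation client expects.
# Sketch only -- the name/code table is hand-picked for illustration,
# and translate() stands in for a real API call.
LANGUAGE_CODES = {
    "English": "en",
    "Arabic": "ar",
    "French": "fr",
    "German": "de",
}

def resolve_languages(lang_from, lang_to):
    """Mirror the CultureInfo loop: look up both language codes."""
    try:
        return LANGUAGE_CODES[lang_from], LANGUAGE_CODES[lang_to]
    except KeyError as missing:
        raise ValueError(f"unknown language: {missing}")

def translate(text, lang_from, lang_to, client=None):
    src, dst = resolve_languages(lang_from, lang_to)
    if client is None:  # no real API here; echo what would be sent
        return f"[{src}->{dst}] {text}"
    return client.translate(text, src, dst)

print(translate("hello", "English", "Arabic"))  # [en->ar] hello
```

The C# version gets the same table for free by scanning CultureInfo.GetCultures and comparing EnglishName, which is why the tutorial loops over every neutral culture.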
http://forum.codecall.net/topic/64790-c-translator-using-google-api/page-2
And then I am using the IntelliJ IDE from JetBrains. Download here

I wrote my first Java program 2 days ago. I am doing this with #100DaysOfCode on Twitter to track my progress.

What I have learnt so far:
- Every Java program must have at least one function.
- Functions don't exist on their own. They must always belong to a class.
- A class is a container for related functions.
- Every Java program must have at least one class that holds the main function.
- Then we need access modifiers in Java.

What are access modifiers? They help set boundaries on the scope of a class, constructor, variable, or method. For my first Java program, I will only be interested in the "public" access modifier.

package com.codewitedison;

public class Main {
    public static void main(String[] args) {
        System.out.println("Welcome to my first Java Program");
    }
}

IntelliJ makes it really easy. It came with the Java boilerplate that set the public class Main and the public static void main lines. I only added the System.out.println and print message. System.out is a way to print results to the terminal.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/buzzedison/i-started-learning-java-3hme
Typescript vs Javascript

If you have ever worked on a web development project, you must have seen what JavaScript is like. JavaScript has been there for a long time now as the most popular scripting language for many web projects. Typescript is an open-source programming language most suited for large applications. It was developed by Microsoft in 2012, mainly because JavaScript code was becoming too complex to handle when it comes to large-scale applications.

Difference between Typescript and Javascript

Essentially, all your JavaScript code is also valid in Typescript – that means Typescript is a superset of JavaScript – or JavaScript + more features = Typescript. So, if you save your JavaScript (.js) file with a Typescript (.ts) extension, it will work perfectly fine. But that does not mean that Typescript and JavaScript are the same. Before outlining the differences between both, let us understand what each language looks like!
Some unique features of JavaScript are -

- Flexible, dynamic and cross-platform
- Used for both client-side and server-side scripting
- Lightweight and interpreted
- Supported by all browsers
- Weakly typed
- JIT compilation

Let us take a simple example to illustrate how JavaScript works. Consider a simple HTML page that validates a username field using a JavaScript function, myFunction(). The syntax of such a function is similar to Java; however, the variables are declared with 'var' and not given any type. The myFunction() function is triggered when the user clicks a 'Submit' button and shows appropriate alert messages depending on the input. It is that simple! If you know Java, JavaScript is fairly simple to learn.

TypeScript

TypeScript is no different from JavaScript in its purpose, but it is used for developing large applications. TypeScript transpiles (source-to-source compilation) to JavaScript. It follows an object-oriented programming structure and supports features like classes, interfaces, namespaces, and inheritance. Static typing is possible in TypeScript through type annotations (number, string and boolean). For example,

```typescript
class Student {
    private name: string;
}
```

As we see above, TypeScript is strongly typed. This makes it easier to debug (at compile time itself), which is a more efficient way to code for large projects. A TypeScript program typically consists of modules, functions, variables, comments, expressions, and statements - just like any other full-fledged programming language.
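To make the strong-typing point concrete, here is a slightly larger sketch. The function and names are my own illustration, not from the article:

```typescript
// Hypothetical example: type annotations let the compiler reject bad calls.
function totalPrice(unitPrice: number, quantity: number): number {
  return unitPrice * quantity;
}

const total: number = totalPrice(9.99, 3);
console.log(total.toFixed(2));

// In plain JavaScript, passing a string here would only fail at runtime
// (or silently produce a wrong result); in TypeScript the next line
// would be a compile-time error:
// totalPrice("9.99", 3);
```

This is the "early detection of errors" advantage in practice: the mistake never reaches the browser.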
Some prominent features of TypeScript are -

- Easy to maintain; enhances project productivity
- Static typing and type annotations
- Supports object-oriented features like interfaces, inheritance, and classes
- Easy to debug, with early detection of errors
- Supports ES6 (ECMAScript 2015), which offers an easier syntax for handling objects and inheritance
- Good full-fledged IDE support

Does that make TypeScript better than JavaScript? Before comparing TypeScript and JavaScript further, another important question needs to be addressed! Since TypeScript is a superset of JavaScript, should we always use TypeScript? Does being a superset make TypeScript suitable for all types of projects?

No. TypeScript is in no way a replacement for JavaScript, nor does it make JavaScript obsolete. JavaScript is still the favorite client-side scripting language. For smaller projects, using TypeScript can be an overhead, because transpiling the code into JavaScript is an extra step. JavaScript runs directly in the browser, so for small code chunks it is easier to refresh and debug the code. In the case of TypeScript, we need a proper IDE and setup to run the code.

When should you migrate your project to TypeScript? When the code becomes huge, complex to handle, and more prone to errors, it is better if some errors are caught at compile time itself. That is where TypeScript helps. The beauty is that the entire codebase written in JavaScript can be reused as such.

Typescript vs Javascript: Head to head comparison

Now that we understand the basic features and purpose of both, let us explore some more low-level differences beyond what we have already covered.

Conclusion

As we have already determined, JavaScript is most suited when your team is new and is working on small web projects. If you have a team with good expertise and knowledge and you want them to handle a complex project, going for TypeScript is a perfect choice.
That said, if you are weighing the learning investment against job opportunities, TypeScript definitely has an edge over JavaScript. TypeScript developers are paid an average salary of $148,000 per year, whereas JS developers are typically paid around $110,000 per year. If you learn TypeScript, you can work on both JS and TypeScript projects. Start learning TypeScript here.
https://hackr.io/blog/typescript-vs-javascript
#include <cyg/io/framebuf.h>

typedef struct cyg_fb {
    cyg_ucount16 fb_depth;
    cyg_ucount16 fb_format;
    cyg_uint32   fb_flags0;
    …
} cyg_fb;

extern const cyg_uint8 cyg_fb_palette_ega[16 * 3];
extern const cyg_uint8 cyg_fb_palette_vga[256 * 3];

#define CYG_FB_DEFAULT_PALETTE_BLACK        0x00
#define CYG_FB_DEFAULT_PALETTE_BLUE         0x01
#define CYG_FB_DEFAULT_PALETTE_GREEN        0x02
#define CYG_FB_DEFAULT_PALETTE_CYAN         0x03
#define CYG_FB_DEFAULT_PALETTE_RED          0x04
#define CYG_FB_DEFAULT_PALETTE_MAGENTA      0x05
#define CYG_FB_DEFAULT_PALETTE_BROWN        0x06
#define CYG_FB_DEFAULT_PALETTE_LIGHTGREY    0x07
#define CYG_FB_DEFAULT_PALETTE_LIGHTGRAY    0x07
#define CYG_FB_DEFAULT_PALETTE_DARKGREY     0x08
#define CYG_FB_DEFAULT_PALETTE_DARKGRAY     0x08
#define CYG_FB_DEFAULT_PALETTE_LIGHTBLUE    0x09
#define CYG_FB_DEFAULT_PALETTE_LIGHTGREEN   0x0A
#define CYG_FB_DEFAULT_PALETTE_LIGHTCYAN    0x0B
#define CYG_FB_DEFAULT_PALETTE_LIGHTRED     0x0C
#define CYG_FB_DEFAULT_PALETTE_LIGHTMAGENTA 0x0D
#define CYG_FB_DEFAULT_PALETTE_YELLOW       0x0E
#define CYG_FB_DEFAULT_PALETTE_WHITE        0x0F

cyg_ucount16 CYG_FB_FORMAT(framebuf);
void cyg_fb_read_palette(cyg_fb* fb, cyg_ucount32 first, cyg_ucount32 count, void* data);
void cyg_fb_write_palette(cyg_fb* fb, cyg_ucount32 first, cyg_ucount32 count, const void* data, cyg_ucount16 when);
cyg_fb_colour cyg_fb_make_colour(cyg_fb* fb, cyg_ucount8 r, cyg_ucount8 g, cyg_ucount8 b);
void cyg_fb_break_colour(cyg_fb* fb, cyg_fb_colour colour, cyg_ucount8* r, cyg_ucount8* g, cyg_ucount8* b);
void CYG_FB_READ_PALETTE(FRAMEBUF, cyg_ucount32 first, cyg_ucount32 count, void* data);
void CYG_FB_WRITE_PALETTE(FRAMEBUF, cyg_ucount32 first, cyg_ucount32 count, const void* data, cyg_ucount16 when);
cyg_fb_colour CYG_FB_MAKE_COLOUR(FRAMEBUF, cyg_ucount8 r, cyg_ucount8 g, cyg_ucount8 b);
void CYG_FB_BREAK_COLOUR(FRAMEBUF, cyg_fb_colour colour, cyg_ucount8* r, cyg_ucount8* g, cyg_ucount8* b);

Managing colours can be one of the most difficult aspects of writing graphics code, especially if that code is intended to be portable to many different platforms. Displays can vary from 1bpp monochrome, via 2bpp and 4bpp greyscale, through 4bpp and 8bpp paletted, and up to 16bpp and 32bpp true colour - and those are just the more common scenarios. The various drawing primitives like cyg_fb_write_pixel work in terms of cyg_fb_colour values, usually an unsigned integer. Exactly how the hardware interprets a cyg_fb_colour depends on the format. There are a number of ways of finding out how these values will be interpreted by the hardware:

The CYG_FB_FLAGS0_TRUE_COLOUR flag is set for all true colour displays. The format parameter can be examined for more details but this is not usually necessary. Instead code can use cyg_fb_make_colour or CYG_FB_MAKE_COLOUR to construct a cyg_fb_colour value from red, green and blue components.

If the CYG_FB_FLAGS0_WRITEABLE_PALETTE flag is set then a cyg_fb_colour value is an index into a lookup table known as the palette, and this table contains red, green and blue components. The size of the palette is determined by the display depth, so 16 entries for a 4bpp display and 256 entries for an 8bpp display. Application code or a graphics library can install its own palette, so it can control exactly what colour each cyg_fb_colour value corresponds to. Alternatively there is support for installing a default palette.

If CYG_FB_FLAGS0_PALETTE is set but CYG_FB_FLAGS0_WRITEABLE_PALETTE is clear then the hardware uses a fixed palette. There is no easy way for portable software to handle this case. The palette can be read at run-time, allowing the application's desired colours to be mapped to whichever palette entry provides the best match.
However normally it will be necessary to write code specifically for the fixed palette. Otherwise the display is monochrome or greyscale, depending on the depth. There are still variations, for example on a monochrome display colour 0 can be either white or black.

As an alternative, or to provide additional information, the exact colour format is given by the fb_format field of the cyg_fb structure or by the CYG_FB_FORMAT macro. It can be one of the following (more entries may be added in future):

- A simple 1bpp monochrome display, with 0 as black or the darker of the two colours, and 1 as white or the lighter colour.
- A simple 1bpp monochrome display, with 0 as white or the lighter of the two colours, and 1 as black or the darker colour.
- A 1bpp display which cannot easily be described as monochrome. This is unusual and not readily supported by portable code. It can happen if the framebuffer normally runs at a higher depth, for example 4bpp or 8bpp paletted, but is run at only 1bpp to save memory. Hence only two of the palette entries are used, but they can be set to arbitrary colours. The palette may be read-only or read-write.
- A 2bpp display offering four shades of grey, with 0 as black or the darkest of the four shades, and 3 as white or the lightest.
- A 2bpp display offering four shades of grey, with 0 as white or the lightest of the four shades, and 3 as black or the darkest.
- A 2bpp display which cannot easily be described as greyscale, for example providing black, red, blue and white as the four colours. This is unusual and not readily supported by portable code. It can happen if the framebuffer normally runs at a higher depth, for example 4bpp or 8bpp paletted, but is run at only 2bpp to save memory. Hence only four of the palette entries are used, but they can be set to arbitrary colours. The palette may be read-only or read-write.
- A 4bpp display offering sixteen shades of grey, with 0 as black or the darkest of the 16 shades, and 15 as white or the lightest.
- A 4bpp display offering sixteen shades of grey, with 0 as white or the lightest of the 16 shades, and 15 as black or the darkest.
- A 4bpp paletted display, allowing for 16 different colours on screen at the same time. The palette may be read-only or read-write.
- An 8bpp paletted display, allowing for 256 different colours on screen at the same time. The palette may be read-only or read-write.
- An 8bpp true colour display, with three bits (eight levels) of red and green intensity and two bits (four levels) of blue intensity.
- A 16bpp true colour display with 5 bits each for red and blue and 6 bits for green.
- A 16bpp true colour display with five bits each for red, green and blue, and one unused bit.
- A 32bpp true colour display with eight bits each for red, green and blue and eight bits unused.

For the true colour formats the format does not define exactly which bits in the pixel are used for which colour. Instead the cyg_fb_make_colour and cyg_fb_break_colour functions, or the equivalent macros, should be used to construct or decompose pixel values.

Palettes are the common way of implementing low-end colour displays. There are two variants. A read-only palette provides a fixed set of colours and it is up to application code to use these colours appropriately. A read-write palette allows the application to select its own set of colours. Displays providing a read-write palette will have the CYG_FB_FLAGS0_WRITEABLE_PALETTE flag set in addition to CYG_FB_FLAGS0_PALETTE. Even if application code can install its own palette, many applications do not exploit this functionality and instead stick with a default. There are two standard palettes: the 16-entry PC EGA palette for 4bpp displays; and the 256-entry PC VGA palette, a superset of the EGA one, for 8bpp displays.
This package provides the data for both, in the form of the arrays cyg_fb_palette_ega and cyg_fb_palette_vga, and 16 #define's such as CYG_FB_DEFAULT_PALETTE_BLACK for the EGA colours and the first 16 VGA colours. By default device drivers for read-write paletted displays will install the appropriate default palette, but this can be suppressed using the configuration option CYGFUN_IO_FRAMEBUF_INSTALL_DEFAULT_PALETTE. If a custom palette will be used then installing the default palette wastes 48 or 768 bytes of memory.

It should be emphasized that displays vary widely. A colour such as CYG_FB_DEFAULT_PALETTE_YELLOW may appear rather differently on two different displays, although it should always be recognizable as yellow. Developers may wish to fine-tune the palette for specific hardware.

The current palette can be retrieved using cyg_fb_read_palette or CYG_FB_READ_PALETTE. The first and count arguments control which palette entries should be retrieved. For example, to retrieve just palette entry 12, first should be set to 12 and count should be set to 1. To retrieve all 256 entries for an 8bpp display, first should be set to 0 and count should be set to 256. The data argument should point at an array of bytes, allowing three bytes for every entry. Byte 0 will contain the red intensity for the first entry, byte 1 green and byte 2 blue.

For read-write palettes the palette can be updated using cyg_fb_write_palette or CYG_FB_WRITE_PALETTE. The first and count arguments are the same as for cyg_fb_read_palette, and the data argument should point at a suitable byte array packed in the same way. The when argument should be one of CYG_FB_UPDATE_NOW or CYG_FB_UPDATE_VERTICAL_RETRACE. With some displays updating the palette in the middle of an update may result in visual noise, so synchronizing to the vertical retrace avoids this. However not all device drivers will support this.

There is an assumption that palette entries use 8 bits for each of the red, green and blue colour intensities. This is not always the case, but the device drivers will perform appropriate adjustments. Some hardware may use only 6 bits per colour, in which case the device driver will ignore the bottom two bits of the supplied intensity values. Occasionally hardware may use more than 8 bits, in which case the supplied 8 bits are shifted left appropriately and zero-padded. Device drivers for such hardware may also provide device-specific routines to manipulate the palette in a non-portable fashion.

True colour displays are often easier to manage than paletted displays. However this comes at the cost of extra memory. A 16bpp true colour display requires twice as much memory as an 8bpp paletted display, yet can offer only 32 or 64 levels of intensity for each colour as opposed to the 256 levels provided by a palette. It also requires twice as much video memory bandwidth to send all the pixel data to the display for every refresh, which may impact the performance of the rest of the system. A 32bpp true colour display offers the same colour intensities but requires four times the memory and four times the bandwidth.

Exactly how the colour bits are organized in a cyg_fb_colour pixel value is not defined by the colour format. Instead code should use the cyg_fb_make_colour or CYG_FB_MAKE_COLOUR primitives. These take 8-bit intensity levels for red, green and blue, and return the corresponding cyg_fb_colour.
When using the macro interface the arithmetic happens at compile-time, for example:

#define BLACK  CYG_FB_MAKE_COLOUR(FRAMEBUF,   0,   0,   0)
#define WHITE  CYG_FB_MAKE_COLOUR(FRAMEBUF, 255, 255, 255)
#define RED    CYG_FB_MAKE_COLOUR(FRAMEBUF, 255,   0,   0)
#define GREEN  CYG_FB_MAKE_COLOUR(FRAMEBUF,   0, 255,   0)
#define BLUE   CYG_FB_MAKE_COLOUR(FRAMEBUF,   0,   0, 255)
#define YELLOW CYG_FB_MAKE_COLOUR(FRAMEBUF, 255, 255,  80)

Displays vary widely so the numbers may need to be adjusted to give the exact desired colours. For symmetry there are also cyg_fb_break_colour and CYG_FB_BREAK_COLOUR primitives. These take a cyg_fb_colour value and decompose it into its red, green and blue components.
http://www.ecoscentric.com/ecospro/doc/html/ref/framebuf-colour.html
[HELP] Update View During Button Action

```python
def changeBackground(sender):
    z = sender.superview
    x = 1
    while x <= 10:
        button = 'button' + str(x)
        z[button].background_color = (0, 0, 0)
        x += 1
        time.sleep(1)
```

In my mind, this code (assuming that this gets called by pressing a button) should change the background color of 'button1', 'button2', and so on in order, changing one every second. However, if this code is run, the program waits for 10 seconds and then updates all of the buttons' background colors at once. How can I get it to update the UI at the time when that line of code is run? I have tried z[button].set_needs_display() to no avail. Thanks!

Use the @ui.in_background decorator so it doesn't run in the main UI thread (in your code, time.sleep blocks the entire UI from updating for ten seconds).

The ui module has some odd threading habits. In particular, all functions called by the UI (button actions, delegate methods, etc.) are blocking by default, which means that the entire UI will not update until the function returns. As a workaround you can use the ui.in_background function decorator, which will cause the function to run in a different thread than the one that does UI updates. Decorators are used like this:

```python
@ui.in_background
def changeBackground(sender):
    ....
```

A short (unrelated) suggestion regarding your code - Python provides a very easy way to iterate over containers using a for loop, without the need for an integer iteration counter.
The function you posted could also be written like this:

```python
@ui.in_background
def changeBackground(sender):
    for button in sender.superview:
        button.background_color = (0, 0, 0)
        time.sleep(1)
```

In cases where you need to iterate over a range of numbers, use the range function:

```python
for i in range(10):  # iterates over the numbers from 0 (inclusive) to 10 (exclusive)
    print i
```

Or if you need to iterate over a container, but also need to know each element's index, there's enumerate:

```python
for i, word in enumerate(["These", "are", "words."]):
    print "Word", i, "=", word
```

In Python 2, you should almost always choose to use xrange() instead of range() for performance and memory management reasons...

```python
import sys
print(sys.getsizeof(range(10)))        # 72
print(sys.getsizeof(xrange(10)))       # 20
print(sys.getsizeof(range(1000000)))   # 4000032, that is 4MB of RAM instead of 20 bytes
print(sys.getsizeof(xrange(1000000)))  # 20
```

Thank you all for the help! I once thought I was fairly fluent in Python, however I now realize how little I know.

EDIT: Another quick thing. Let's say I make a game on Pythonista - is it possible to make some sort of "savegame" file? That way you could save leaderboards and/or progress?

I'm no expert on this (still learning Python myself), but there are countless different ways to save data, be it a database, JSON, shelve, and others. Which one is best depends on your application. Saving a leaderboard is simple enough that you could do it with a simple shelve-style database. Simply storing the last level a person was on would also be simple. If you need to save exactly what's going on in a game that has lots of independent characters that all have logic and history associated with them, you'll probably want something more sophisticated.

@TQA, check out the high-scores module in the Games section of Pythonista-Tools. It does a lot of what you are looking for. Personally I prefer JSON for persistent data storage with Python.
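As a sketch of the "simple shelve-style database" idea mentioned above (the filename, keys, and best-score policy are my own illustration, written for Python 3, not how the high-scores module actually works):

```python
import shelve

# Hypothetical leaderboard storage using shelve: a dict-like object
# persisted to disk. 'scores.db' is an arbitrary filename.
def record_score(player, score, path='scores.db'):
    with shelve.open(path) as db:
        # Keep only the best score seen for each player.
        if score > db.get(player, 0):
            db[player] = score

def top_scores(path='scores.db', n=3):
    with shelve.open(path) as db:
        return sorted(db.items(), key=lambda kv: kv[1], reverse=True)[:n]

record_score('ana', 120)
record_score('bob', 90)
record_score('ana', 80)   # ignored: lower than ana's best
print(top_scores())
```

The appeal of shelve here is that you never think about file formats at all; the trade-off, as noted below, is that the data is neither human-readable nor human-editable the way JSON is.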
The types it has are very similar to Python's basic data types, so if you can represent all your data in some combination of dicts, lists, strs, floats, ints and bools, you can easily dump and load that data using the json module. A database is probably more complex than necessary for something as simple as a score leaderboard, but if you need to store large data sets a database may be a better choice than JSON. If you need to store the state of multiple Python objects, then pickle or shelve are probably best. I'm not sure what exactly the limits of those modules are; certain objects can or cannot be pickled/shelved depending on how they are laid out.

@dgelessus, I completely agree with your recommendation of json as the preferred method of storing Python objects in files. It is also super helpful that modules like Requests have built-in support for JSON, and that the resulting files are very human-readable and human-editable. When @techteej was working on the high scores module that I mentioned above, he made the design choice to use pickle instead of json because it was too easy to cheat by opening up high_scores.json in the Pythonista editor and increasing your high score. The use of pickle (slightly) increases the difficulty of cheating in this way.

I just want to save a single integer along with the date/time. I feel pretty stupid. I have no idea how this works as I have never done anything with reading/writing files before. How do I create a file to be written to?

```python
with open('myfile.txt', 'w') as f:
    f.write('something')
```

```python
import datetime, json, random

filename = 'my_file.json'
data_a = [random.randint(1, 100), unicode(datetime.datetime.now())]
print(data_a)
with open(filename, 'w') as out_file:
    json.dump(data_a, out_file)
with open(filename) as in_file:
    data_z = json.load(in_file)
print(data_z)
assert data_a == data_z, 'Something went wrong: {} != {}'.format(data_a, data_z)
```

I seem to have run into a problem..
I write

```python
import os
import os.path

with open('save.txt', 'w') as f:
    f.write('hello')

with open('save.txt', 'r') as f:
    f.read()
```

Nothing happens. I don't get any errors, but the console output window doesn't even open.

Try changing the last line to: print(f.read()).
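Putting that fix together, the complete snippet would be (the os imports in the original are unused and could be dropped):

```python
# Write a string to a file, then read it back and print it.
with open('save.txt', 'w') as f:
    f.write('hello')

with open('save.txt', 'r') as f:
    contents = f.read()
    print(contents)  # without print(), f.read() returns the text but displays nothing
```

The console stayed empty in the original because f.read() returns the file's contents as a string; unless that string is printed (or otherwise used), the script produces no output.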
https://forum.omz-software.com/topic/1437/help-update-view-during-button-action
nathan.f77 (Nathan Broadbent)
- Registered on: 10/28/2012
- Last connection: 07/15/2014

Issues Activity

07/16/2014
- 08:54 PM Ruby trunk Feature #10040: `%d` and `%t` shorthands for `Date.new(*args)` and `Time.new(*args)` - > Nathan, Date is only available in stdlib (maybe it should be moved to core) so I don't think those literals will wo...
- 06:09 AM Ruby trunk Feature #10040: `%d` and `%t` shorthands for `Date.new(*args)` and `Time.new(*args)` - Nobuyoshi Nakada wrote: > I don't think there is possibilities to introduce `%d` and `%i`, especially using variable...

07/15/2014
- 08:30 PM Ruby trunk Feature #10040 (Feedback): `%d` and `%t` shorthands for `Date.new(*args)` and `Time.new(*args)` - I'm working on a Ruby application where we have to deal with a lot of dates and times, especially in our test suite. ...

01/25/2013
- 03:53 AM Ruby trunk Feature #7739: Define Hash#| as Hash#reverse_merge in Rails - I would personally love a more concise way to merge/reverse_merge hashes. Would you also propose Hash#& as merge? ...

11/20/2012
- 04:29 AM Ruby trunk Feature #7388: Object#embed - > > I'd even say that `embed` is wrong. > > I would like to know of a good example of use case. I often succum...

11/18/2012
- 11:55 AM Ruby trunk Feature #5478: import Set into core, add syntax - I really like `~[1, 2, 3]` as a shortcut for `Set.new([1, 2, 3])`:

      class Array
        def ~@
          Set.new self
      ...

11/14/2012
- 04:53 AM Ruby trunk Feature #7341: Enumerable#associate - > > 1) The form you suggest would be redundant with `Enumerable#to_h` > I agree that 'Enumerable#to_h' woul...
- 04:23 AM Ruby trunk Feature #7346: object(...) as syntax sugar for object.call(...) - @rosenfeld, I'll just mention that you can use Proc#[] in your example:

      double = -> n { n * 2 }
      double[3]
      ...

11/13/2012
- 08:29 AM Ruby trunk Feature #7341 (Open): Enumerable#associate - Jeremy Kemper proposed Enumerable#associate during the discussion in #7297, with the following details: ----------...
- 08:17 AM Ruby trunk Feature #7340 (Open): 'each_with' or 'into' alias for 'each_with_object' - Following on from the discussions at #7297 and #7241, it is apparent that a shorter alias for 'each_with_object' woul...
https://bugs.ruby-lang.org/users/6274
I recently got to work outside my comfort zone: rather than scripting in D, I was to utilize Python. It has been a long time since I programmed in Python; I've developed a style, and Python does not follow C syntactic choices. Thus I had to search for how to solve problems I already knew how to solve in D. I thought it would make for an opportunity to answer those types of questions for D. My need to specify a boolean value seemed like a good place to start.

```dlang
bool variable = true; // false
```

D utilizes lowercase true/false. It will also treat 0 or null as false.

```dlang
string str;
if(str) // false, str is null
```

Strings are special in that null or empty likely need similar logic paths. In D the following works with all arrays (a string is an array):

```dlang
import std.range;
string str;
if(str.empty) // null or ""
```

D has custom types with operator overloading, so such a post is not complete without mentioning it. Classes can't override boolean conversion; using a class reference in a condition only checks whether the reference is null. However a struct, being a value type, allows for changing the behavior with `opCast`:

```dlang
if (e)  => if (e.opCast!(bool))
if (!e) => if (!e.opCast!(bool))
```

D's operator overloading relies on its powerful template system. That is out of scope for this article.
https://he-the-great.livejournal.com/62199.html
Using circle detection, the system works for a bit, then disconnects itself, and then ends by opening its internal flash drive with the blue LED flashing. Please advise.

Tom

Find Circles Example

This example shows off how to find circles in the image using the Hough Transform. See Circle Hough Transform - Wikipedia. Note that the find_circles() method will only find circles which are completely inside of the image. Circles which go outside of the image/roi are ignored…

Circle objects have four values: x, y, r (radius), and magnitude. The magnitude is the strength of the detection of the circle; higher is better. threshold controls how many circles are found; increase its value to decrease the number of circles detected. x_margin, y_margin, and r_margin control the merging of similar circles in the x, y, and r (radius) directions. r_min, r_max, and r_step control what radiuses of circles are tested; shrinking the number of tested circle radiuses yields a big performance boost.

```python
import sensor, image, time, pyb

sensor.reset()
sensor.set_pixformat(sensor.RGB565)  # grayscale is faster
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
clock = time.clock()
ledRed = pyb.LED(1)

while(True):
    clock.tick()
    img = sensor.snapshot().lens_corr(1.8)
    for c in img.find_circles(threshold=2000, x_margin=10, y_margin=10,
                              r_margin=10, r_min=2, r_max=100, r_step=2):
        img.draw_circle(c.x(), c.y(), c.r(), color=(255, 0, 0))
        ledRed.on()
        print("Circle Detected")
        print(c)
        pyb.delay(500)
        ledRed.off()
    print("FPS %f" % clock.fps())
    print("*")
```
https://forums.openmv.io/t/openmv-camera-works-then-quits/1791
C; implementations of C must conform to the standard. An "implementation" is basically the combination of the compiler and the hardware platform. As you will see, the C standard usually does not specify exact sizes for the native types, preferring to set a basic minimum that all conforming implementations of C must meet. Implementations are free to go above these minimum requirements. This means that different implementations can have different sizes for the same type. For example, a C compiler from the fictitious company ABC running on a 64-bit Intel processor might define an int to be 64 bits, while another C compiler from the fictitious XYZ organization for a 16-bit TI embedded processor might define an int to be 16 bits. They both meet the minimum requirements, so both are valid. The next sections look at C types in detail, starting with the smallest.

Boolean type

The _Bool keyword denotes a value that can hold either zero or one. If another numeric or pointer type is assigned to a boolean, the value stored is 0 if the numeric or pointer value evaluates to 0, otherwise it is 1. Let's see an example:

```c
#include <stdio.h>

void main()
{
    int a = 3944;
    long b = -199020930;
    double c = 7.534e-10;
    double * d = &c;
    _Bool ba = a;
    _Bool bb = b;
    _Bool bc = c;
    _Bool bd = d;
    _Bool be = ( 1 == 2 );
    printf( "ba = %d\n", ba );
    printf( "bb = %d\n", bb );
    printf( "bc = %d\n", bc );
    printf( "bd = %d\n", bd );
    printf( "be = %d\n", be );
}
```

The output of this program is:

ba = 1
bb = 1
bc = 1
bd = 1
be = 0

As you can see, any value that is not 0 is considered 1. The declaration of be assigns it the value of the equality expression. Recall from the operators tutorial that relational operators return an int with the value 0 if false or 1 if true.

Integer types

C provides several standard integer types, from small magnitude to large magnitude numbers: char, short int, int, long int, long long int. Each type can be signed or unsigned.
Signed types can represent positive and negative numbers, while unsigned types can represent zero and positive numbers. As mentioned above, the standard does not specify exact sizes for these types; it only says that a larger type's range shall be at least as big as the smaller type's. For example, the range of an int variable is at least as big as the range of a short int.

C defines a minimum set of characters that need to be supported on a system where C programs are written and run (more on this in the next tutorial). Each character is given a numeric value such as A = 65, B = 66, etc. A char is an integer number big enough to at least hold the maximum value in this set, though in typical implementations it is bigger. C does not specify whether a plain 'char' refers to a signed or unsigned number - that is implementation defined. If you need to be sure your char variable is signed or unsigned, specify it in the declaration like this: unsigned char c.

The short int is a small signed integer. C allows abbreviated or longer names for the same type: short, signed short, signed short int. For the unsigned version use one of these: unsigned short, unsigned short int.

A signed integer may be declared as: int, signed, signed int. An unsigned integer may be declared as: unsigned, unsigned int.

A signed long int may be declared using one of these names: long, signed long, long int, signed long int. An unsigned version may be declared as: unsigned long, unsigned long int.

A signed long long can be declared like this: long long, signed long long, long long int, signed long long int. An unsigned version can be declared as: unsigned long long, unsigned long long int.

This table defines the minimum ranges allowed for these integer types. An implementation is free to define these types to hold greater ranges than the ones given here. If you want to know the actual minimum and maximum values an implementation uses, they are defined in the limits.h header file.
It defines various constants like SCHAR_MIN, SCHAR_MAX, etc. that give the values for this implementation of C. The following example shows the limits of this implementation:

```c
#include <stdio.h>
#include <limits.h>

void main()
{
    printf("Signed char minimum value: %d\n", SCHAR_MIN );
    printf("Signed char maximum value: %d\n", SCHAR_MAX );
    printf("Unsigned char minimum value: %d\n", 0 );
    printf("Unsigned char maximum value: %d\n", UCHAR_MAX );
    printf("Char minimum value: %d\n", CHAR_MIN );
    printf("Char maximum value: %d\n", CHAR_MAX );
    printf("Signed short minimum value: %d\n", SHRT_MIN );
    printf("Signed short maximum value: %d\n", SHRT_MAX );
    printf("Unsigned short minimum value: %d\n", 0 );
    printf("Unsigned short maximum value: %d\n", USHRT_MAX );
    printf("Signed int minimum value: %d\n", INT_MIN );
    printf("Signed int maximum value: %d\n", INT_MAX );
    printf("Unsigned int minimum value: %u\n", 0U );
    printf("Unsigned int maximum value: %u\n", UINT_MAX );
    printf("Signed long minimum value: %ld\n", LONG_MIN );
    printf("Signed long maximum value: %ld\n", LONG_MAX );
    printf("Unsigned long minimum value: %lu\n", 0UL );
    printf("Unsigned long maximum value: %lu\n", ULONG_MAX );
    printf("Signed long long minimum value: %lld\n", LLONG_MIN );
    printf("Signed long long maximum value: %lld\n", LLONG_MAX );
    printf("Unsigned long long minimum value: %llu\n", 0ULL );
    printf("Unsigned long long maximum value: %llu\n", ULLONG_MAX );
}
```

The output of this program is:

Signed char minimum value: -128
Signed char maximum value: 127
Unsigned char minimum value: 0
Unsigned char maximum value: 255
Char minimum value: -128
Char maximum value: 127
Signed short minimum value: -32768
Signed short maximum value: 32767
Unsigned short minimum value: 0
Unsigned short maximum value: 65535
Signed int minimum value: -2147483648
Signed int maximum value: 2147483647
Unsigned int minimum value: 0
Unsigned int maximum value: 4294967295
Signed long minimum value: -2147483648
Signed long maximum value: 2147483647
Unsigned long minimum value: 0
Unsigned long maximum value: 4294967295
Signed long long minimum value: -9223372036854775808
Signed long long maximum value: 9223372036854775807
Unsigned long long minimum value: 0
Unsigned long long maximum value: 18446744073709551615

As you can see, this implementation keeps close to the minimums allowed except for ints. The output lines 5-6 for the char types also show that in this implementation chars are signed (otherwise the minimum and maximum values would be the same as for unsigned char).

An enumerated type is an integer type that allows you to give names to certain values. Here is an example of an enumeration:

enum language {
    english,
    french,
    spanish = 4,
    german,
    chinese
};

This declares language as an integral type that can have one of 5 values. Each of the members of the enumeration is assigned an integer value. If a value is given, as for the spanish member above, that value is used; otherwise the value is 1 more than the previous member's value. If the first member is not given a value, it is assigned 0. Following these rules we can see that the values for the members would be english = 0, french = 1, spanish = 4, german = 5, chinese = 6. Here is an example of how to use a variable of this enumerated type:

enum language lang = spanish;

if ( lang == english )
    printf( "I speak english\n" );
else if ( lang == spanish )
    printf( "Yo hablo espanol\n" );
else
    printf( "I don't speak that language\n" );

This declares lang as having the enumeration type language and assigns it the spanish member of the enumeration. Then it compares lang with the english and spanish members of the enumeration to print the appropriate message. Since the members of an enumerated type have integer values, you can compare them to other numbers. But generally you want to avoid comparing them to hardcoded values, like if ( lang == 5 ) to check for the german value, because any change in the members of the type can possibly change their values.
Suppose the language enumeration was changed to add a new russian value after the spanish value and before the german value. Now russian has the value 5, not german, so the statement if ( lang == 5 ) will be true for the wrong language.

The C standard does not define specific sizes for the integer types discussed so far. However, it does require the implementation to define integer types that have at least a certain number of bits. It also allows an implementation to define integer types that have a specific size in bits. These types are defined in the stdint.h header file. The types int_leastN_t and uint_leastN_t define integer types that are at least N bits wide. For example, int_least8_t defines a signed integer that is at least 8 bits, and uint_least64_t defines an unsigned integer that is at least 64 bits. In addition, an implementation might provide intN_t and uintN_t types that are exactly N bits wide.

l_eight is 1 bytes
thirtytwo is 4 bytes

This shows that the int_least8_t type is 8 bits (1 byte) and thirtytwo is 32 bits (4 bytes). In addition to all the types discussed above, an implementation may define its own integer types. Consult the implementation's documentation for details about these types.

Floating point types

Floating point numbers are real numbers that, unlike integers, may contain fractional parts, like 1.446, -112.972, 3.267e+27. There are three standard floating point types: float, double and long double. Just like integers, their sizes are not specified by the standard; it just sets the minimum allowed ranges of the magnitudes and precision for these types. The standard specifies minimum and maximum magnitudes and digits of precision each of these types must support (in base 10). The magnitude values are absolute values; a floating point number may be positive or negative.
The standard requires that an implementation's float, double and long double types be able to represent values at least as small in magnitude as 1.0e-37 and at least as large as 1.0e+37. There is a floating point counterpart to limits.h called float.h that defines constants containing the implementation's minimum and maximum magnitudes and precision for floating point types:

#include <stdio.h>
#include <float.h>

void main()
{
    printf("Minimum value of a float: %.5g\n", FLT_MIN );
    printf("Maximum value of a float: %.5g\n", FLT_MAX );
    printf("Precision of a float: %d digits\n", FLT_DIG );
    printf("Minimum value of a double: %.5g\n", DBL_MIN );
    printf("Maximum value of a double: %.5g\n", DBL_MAX );
    printf("Precision of a double: %d digits\n", DBL_DIG );
    printf("Minimum value of a long double: %.5Lg\n", LDBL_MIN );
    printf("Maximum value of a long double: %.5Lg\n", LDBL_MAX );
    printf("Precision of a long double: %d digits\n", LDBL_DIG );
}

You can see that this implementation's double and long double types greatly exceed the minimum required. Most implementations will also provide a type for handling complex numbers, which can be combined with the above floating types depending on how much magnitude and precision you want. A complex type may be declared as: float _Complex, double _Complex or long double _Complex. Here is a simple example adding two complex numbers:

Result is 6.9 + -5I

Line 5 declares a complex that is comprised of float real and float imaginary parts. Lines 6 and 7 declare complex numbers with doubles for the real and imaginary parts, and line 9 adds them together. As you learned in the operators and expressions tutorials, the operands of the addition operator may be converted to a common type; in this case the float _Complex is converted to a double _Complex and the result is a double _Complex. The functions creal() and cimag() used on line 11 return the real and imaginary parts of a complex number.
The header file complex.h has many math functions that work with complex numbers. Some implementations may provide an imaginary type as well, but this is not required by the C standard. Consult your implementation's documentation for more info about functions on complex numbers and support for imaginary types.

Void type

The void type basically means "nothing". A void type cannot hold any values. You cannot declare a variable to be a void, but you can declare a variable to be a pointer to void. A pointer to void is used when the pointer may point to various different types. To use the value that a void pointer is pointing to, it must be cast into another type. In the previous tutorial on expressions you saw an example of this in the section on explicit type conversion. You can also declare a function's return type as void to indicate that the function does not return any value. You have seen this already in several examples where the main() function is declared to return void. Here is another example. The output of this program is:

a + b = 18
c + d = 18

This program has two functions, func1() and func2(), that add two integers. The function func1() returns the sum of its arguments. The function func2() adds the arguments and prints out the result; it does not return a result.

The single void inside parentheses can also be used in function declarations to indicate that a function takes no parameters. For example, consider these two function declarations:

extern int func1();
extern int func2(void);

The first declares func1() as a function that takes an unspecified number and types of parameters and returns an integer. The second declares func2() as a function that takes no parameters and returns an integer.

Another way the void keyword is sometimes used is to indicate that you purposefully ignore the returned value of a function. For example, the fclose() function returns an integer that is 0 for success or EOF (a special constant defined in stdio.h) on errors.
However, in some cases you might not care whether there are errors or not. In those cases you can precede the call with (void) like this:

(void) fclose( fp );

This indicates to anyone reading the source code that the return value of fclose() is being purposefully ignored. The (void) is not necessary; a statement without it would do the same thing, with the return value being silently ignored. It is really just a matter of style.

Pointers

Pointers are variables that "point to" other variables. A pointer must be declared to point to one certain type of variable, such as a pointer to integer or pointer to float. A void pointer is a special case, as discussed above. Just as an integer variable holds a number, a pointer variable holds a memory address. This is usually the address of another variable, or an area of allocated memory, but it can be any address. Once a pointer is assigned an address, you can access the value that it is pointing to with the dereference operator *. You've already seen some examples of pointers in previous tutorials, but let's look at another simple example. The output of this program is:

Address of a = 0xcfbf0404
Value of a = 123
Value of b = 0xcfbf0404
Value of variable that b is pointing to = 123
Value of a = 321

Line 6 in this program declares b as a pointer to an integer and initializes it with the address of integer a. You can see in the output that the value of b is the address of a. The dereference operator on lines 12 and 13 gives access to the value that b is pointing to, which is the value of a. In the last line of the program output you can see that the value of a has changed because it was set to 321 on line 13.

The address 0 is a special case. A pointer that points to address 0 is called a "null pointer", and there is a constant NULL that denotes a null pointer. Null pointers are used to indicate error conditions. Many functions that return pointers return NULL if there was an error.
Code like the lines below is very common:

char filename[1024];
FILE *fp;

/* code here to set filename to the name of file to be read. */

if (( fp = fopen( filename, "r" )) == NULL ) {
    fprintf( stderr, "Could not read %s\n", filename );
    return 1;
}

Here the function fopen() is used to open a file. That function returns a pointer to a FILE type, but if fopen() fails for any reason it returns NULL. In that case the code prints an error message and returns.

The above example also shows that you can compare any pointer type (in this case a FILE *) to a null pointer without explicit conversion. Normally when comparing different types you would need an implicit or explicit conversion, such as:

if (( fp = fopen( filename, "r" )) == (FILE *)NULL )

But when comparing a pointer to a null pointer that is not necessary.

Arithmetic done on pointers takes the size of the type pointed to into account. For example, if an implementation's size of an int is 4 bytes, and the address of integer a is 0x100, the following code:

int *p = &a; /* stores 0x100 in p */
p += 8;

will increment the address stored in p by 8 * 4 bytes, or 32 bytes total, leaving it with a value of 0x100 + 32 = 0x120. Pointers will be covered in more detail in another tutorial.

Derived types

Derived types are types constructed from the ones we've seen so far in this tutorial. There are three main derived types: arrays, structures and unions.

Arrays

You have probably seen arrays in other programming languages. An array is one or more instances of one type. You declare an array with a type name followed by a variable name and then the size of the array in brackets:

char name[50];
long long num_array[256];
float m[2][4];

The first two lines declare name as an array of 50 characters and num_array as an array of 256 long longs. The third line declares m as a 2-dimensional array of floats, like a matrix with 2 rows and 4 columns.
In C you cannot assign to arrays except when they are first declared or if they are members of a struct. So copying an array like this:

int arr1[4] = { 1, 2, 3, 4 };
int arr2[4];

arr2 = arr1;

will not work. There are ways around this, like using a for loop to copy each element of the array, or using memcpy(). Array elements are accessed using the [] operator, and elements are numbered starting at 0. It is also possible to declare variable sized arrays (if they meet certain conditions). Let's see some examples of using arrays:

#include <stdio.h>
#include <stdlib.h>

void main()
{
    int numarr[ 4 ] = { 12, 92, 54, 890 };
    int sz;
    char buff[64];
    float m[2][4] = { { 1.1, 1.2, 1.3, 1.4 }, { 2.1, 2.2, 2.3, 2.4 }};

    printf( "numarr's 3rd element is %d\n", numarr[2] );
    printf( "m's 2nd row's 4th column value is %f\n", m[1][3] );

    printf("Enter size of variable sized array: ");
    fgets( buff, 64, stdin );
    sz = atoi(buff);

    int arr2[ sz ];
    int i;
    for( i = 0; i < sz; ++i )
        arr2[ i ] = i+1;

    for( i = 0; i < sz; ++i )
        printf("arr2[%d] = %d\n", i, arr2[ i ]);
}

The program's output follows:

numarr's 3rd element is 54
m's 2nd row's 4th column value is 2.400000
Enter size of variable sized array: 5
arr2[0] = 1
arr2[1] = 2
arr2[2] = 3
arr2[3] = 4
arr2[4] = 5

Lines 6 and 9 are examples of initializing an array when you declare it. The initial values are inside the { } symbols, separated by commas. On line 9, each of the two rows of m is initialized in its own { } initializer. Lines 11-12 show that to access element N of an array you use index N-1, since the first element has index 0. The third element of numarr is accessed with index [2], and the second row and fourth column of m is accessed with index [1][3]. Line 18 declares a variable sized array. All the other arrays in this program are constant size; their sizes never change and can be calculated at compile time. The arr2 array's size depends on the number the user enters when the program is run.
On this run the user enters 5, so it is 5 elements long. Lines 20-21 initialize the array so that the element at index 0 gets the number 1, the element at index 1 gets the number 2, and so on. Lines 23-24 print out the elements. Arrays will be discussed further in another tutorial.
http://www.exforsys.com/tutorials/c-language/c-programming-language-data-types.html
- Part 1 - A Quirk With Implicit vs Explicit Interfaces
- Part 2 - What is an OpCode? (this post)
- Part 3 - Conditionals and Loops

In my last post we saw the differences between interface implementations in the IL that is generated. As a refresher, the code that produced that IL looked like this:

interface ICounter
{
    void Add(int count);
}

class Counter : ICounter
{
    public int Count { get; set; }

    public void Add(int count) => Count += count;
}

Today, I want to talk about what the heck that all means, specifically what makes up the method body, and to do that I want to introduce you to the System.Reflection.Emit namespace and specifically the OpCodes class.

Before We Dive In

There's a bit of important background information to understand before we dive too deep into what we're going to look at today, and that's how .NET works. .NET languages like C#, F# and VB.NET (as well as others) all output the Common Intermediate Language (CIL), sometimes referred to as Microsoft Intermediate Language (MSIL), which is then executed by a runtime such as the Common Language Runtime (CLR) using a Just-In-Time (JIT) compiler to create the native code that is executed.

So, regardless of whether you're writing C# or F#, it's all the same at the end of the day, and you can convert F# code to C# by reversing the CIL, but it's probably going to be rather funky. This is how tools like ILSpy work.

The CIL is defined as part of the Common Language Infrastructure (CLI), which is standardised as ECMA-335, with the primary implementation being the one Microsoft has done for .NET, but there's nothing stopping someone else making their own implementation (except time…).

Before anyone asks, no, I haven't read ECMA-335. I did use to have it on my kindle but I never did read it.
ECMA-262 on the other hand… 😉

CIL is a stack-based bytecode, making it quite low-level, and it reminds me a lot of the x86 assembly programming I did at university, so if that's your jam, then we're in for some fun! If you've never had the, err, pleasure of working with assembly, or stack-based machines, the most important thing to know is that you push things onto a stack so that you can read them off again, and you have to read things off in the reverse order that you pushed them on, last-on-first-off (LIFO) style.

Now, onto the fun part!

OpCodes

An OpCode represents an operation in CIL that can be executed by the CLR. These are mostly about working with the stack, but thankfully it's not all PUSH and POP, we get some higher-level operations to work with, and even basic indexers that we can leverage too (so it's not quite as sequential as the 8086 that I learnt).

Our First OpCode

Let's start with this line:

IL_0000: ldarg.0

First things first, we can remove the stuff before the :, as that is the label of the line, which we could use as a jump point, but we don't need the labels at the moment, so we'll focus on the instruction:

ldarg.0

In .NET this is represented as OpCodes.Ldarg_0 and its role is to push argument 0 of the current method onto the stack. There's also ldarg.1, ldarg.2 and ldarg.3 to access the first 4 arguments to a method, with ldarg.s <int> being used to access all the rest. In the future, we'll see how to use ldarg.s, it's not for today.

So if we're calling our code like this:

counter.Add(1);

you might expect argument 0 to be the 1 we passed in, but because Add is an instance method, argument 0 is actually the implicit this reference (our counter object), and that's what ldarg.0 pushes onto the stack. The 1 we passed is argument 1.

Calling Functions in CIL

The next important piece of CIL to look at is this:

call instance int32 UserQuery/Counter::get_Count()

This is a method call using the Call OpCode. To use this OpCode we need to provide some more information: the location of the method we're calling, the return type and finally the method reference. But wait, what method are we calling?
We're calling Counter.get_Count(), but we never wrote that in our C# code; it was generated for us as the property accessor. This method just wraps the backing field (which was also generated for us, as we used an auto-property). Since the method is part of the type we're also part of, we use the instance location, and it returns an int32. And because this is a non-void method, the call instruction leaves the return value on the stack for us; the next instruction, ldarg.1 (Ldarg_1 in .NET), then pushes the count argument on top of it.

Adding Numbers

With ldarg.1 done we now have two values on the stack: the current value of Count, and on top of it the argument to Add. This means we can add those numbers together, which is what the + operator does, resulting in the following IL:

add

Bet you didn't pick that's what the Add OpCode did! This CIL instruction pops both values off the stack and puts the resulting value back onto the stack, so there's no need to use an ldarg OpCode after it.

Updating Our Property

It's time to update the Count property of our object, and again we'll use the Call OpCode to do it:

call instance void UserQuery/Counter::set_Count(int32)

As we need to pass an argument to set_Count (the auto-generated assignment function) you might wonder how it gets that: well, it gets it off the stack. When a function takes arguments it'll pop off as many as it requires from the stack to execute, so you need to make sure that when you're pushing data onto the stack it's pushed on in the right order, otherwise you can end up with a type mismatch, or the wrong value ending up in the wrong argument.

Finally, Add will exit using the Ret OpCode:

nop
ret

The nop OpCode doesn't do anything at runtime and can be omitted; the compiler includes it in Debug builds so the debugger has somewhere to place breakpoints.

Conclusion

Hopefully you're still with me and have enjoyed dipping your toe into understanding what an OpCode is in CIL.
We've broken down the 8 lines of CIL that were generated for this one line of C#:

Count += count;

With our method implementation we used the expression body syntax rather than a traditional method body. What you may be interested to know is that there is no difference in the IL generated for these two method styles, since they are functionally equivalent. The same goes for the use of Count += count vs Count = Count + count; both generate the same CIL, as there's no difference with the addition assignment operator (in our example; there are scenarios where that doesn't always hold true).

It's important to be able to understand these differences, or lack thereof, so we don't make arbitrary code style decisions based on preconceived beliefs about how the code is executed.

We'll keep exploring CIL as we go on, so if there's anything specific you'd like to look into, let me know and we can do some digging!
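If you want to play with these OpCodes yourself, the System.Reflection.Emit namespace introduced at the start of the post lets you emit them at runtime. This is a sketch of mine, not code from the post; it builds an `int Add(int a, int b)` out of the same ldarg/add/ret instructions we just walked through (as a static DynamicMethod, there's no implicit this, so argument 0 really is the first parameter):

```csharp
using System;
using System.Reflection.Emit;

class Demo
{
    static void Main()
    {
        // Build "int Add(int a, int b) => a + b" out of raw OpCodes.
        var method = new DynamicMethod("Add", typeof(int),
                                       new[] { typeof(int), typeof(int) });
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push argument 0
        il.Emit(OpCodes.Ldarg_1); // push argument 1
        il.Emit(OpCodes.Add);     // pop both, push their sum
        il.Emit(OpCodes.Ret);     // return the top of the stack

        var add = (Func<int, int, int>)method.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(add(40, 2)); // 42
    }
}
```

The JIT compiles the emitted CIL the first time the delegate is invoked, just as it does for the CIL your C# compiler produces.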
https://www.aaron-powell.com/posts/2019-09-17-what-is-an-opcode/
Coroutines

Coroutines are the recommended way to write asynchronous code in Tornado. Coroutines use the Python yield keyword to suspend and resume execution. Python 3.5 introduced the async and await keywords (functions using these keywords are also called "native coroutines"). Starting in Tornado 4.3, you can use them in place of most yield-based coroutines (see the following paragraphs for limitations):

async def fetch_coroutine(url):
    http_client = AsyncHTTPClient()
    response = await http_client.fetch(url)
    return response.body

The await keyword is less versatile than the yield keyword. For example, in a yield-based coroutine you can yield a list of Futures, while in a native coroutine you must wrap the list in tornado.gen.multi. This also eliminates the integration with concurrent.futures. You can use tornado.gen.convert_yielded to convert anything that would work with yield into a form that will work with await:

async def f():
    executor = concurrent.futures.ThreadPoolExecutor()
    await tornado.gen.convert_yielded(executor.submit(g))

While native coroutines are not visibly tied to a particular framework (i.e. they do not use a decorator like tornado.gen.coroutine or asyncio.coroutine), not all coroutines are compatible with each other. There is a coroutine runner which is selected by the first coroutine to be called, and then shared by all coroutines which are called directly with await. The Tornado coroutine runner is designed to be versatile and accept awaitable objects from any framework; other coroutine runners may be more limited (for example, the asyncio coroutine runner does not accept coroutines from other frameworks). For this reason, it is recommended to use the Tornado coroutine runner for any application which combines multiple frameworks. To call a coroutine using the Tornado runner from within a coroutine that is already using the asyncio runner, use the tornado.platform.asyncio.to_asyncio_future adapter.

How it works

This pattern is most usable with @gen.coroutine.
If fetch_next_chunk() uses async def, then it must be called as fetch_future = tornado.gen.convert_yielded(self.fetch_next_chunk()) to start the background processing.

Looping

Looping is tricky with coroutines since there is no way in Python to yield on every iteration of a for or while loop and capture the result of the yield.

PeriodicCallback is not normally used with coroutines. Instead, a coroutine can contain a while True: loop and use tornado.gen.sleep:

@gen.coroutine
def minute_loop():
    while True:
        yield do_something()
        yield gen.sleep(60)

This loop runs every 60 + N seconds, where N is the running time of do_something(). To run exactly every 60 seconds, start the timer before the work:

@gen.coroutine
def minute_loop2():
    while True:
        nxt = gen.sleep(60)   # Start the clock.
        yield do_something()  # Run while the clock is ticking.
        yield nxt             # Wait for the timer to run out.
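The "coroutine runner" described above can be demystified with a toy, framework-free sketch. This is my own simplification of what a runner does, not Tornado code: it drives a generator by sending each resolved result back in. (Here every yielded value is treated as if its Future had already resolved; a real runner would wait on it before resuming.)

```python
def run(gen_func, *args):
    """Drive a generator-based coroutine to completion, synchronously."""
    gen = gen_func(*args)
    try:
        value = next(gen)            # run until the first yield
        while True:
            value = gen.send(value)  # resume with the "resolved" result
    except StopIteration as stop:
        return stop.value            # the coroutine's return value


def double_then_add(n):
    doubled = yield n * 2      # stand-in for "yield some_async_call(n)"
    total = yield doubled + 1  # each yield suspends; run() resumes us
    return total


print(run(double_then_add, 5))  # prints 11
```

Tornado's real runner does the same dance, but instead of sending values back immediately it registers a callback on each yielded Future and resumes the generator when that Future resolves.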
http://www.tornadoweb.org/en/branch4.5/guide/coroutines.html
#include <wx/sysopt.h>

The compile-time option to include or exclude this functionality is wxUSE_SYSTEM_OPTIONS.

wxSystemOptions(): Default constructor. You don't need to create an instance of wxSystemOptions since all of its functions are static.

GetOption(): Gets an option. The function is case-insensitive to name. Returns an empty string if the option hasn't been set.

GetOptionInt(): Gets an option as an integer. The function is case-insensitive to name. If the option hasn't been set, this function returns 0.

HasOption(): Returns true if the given option is present. The function is case-insensitive to name.

IsFalse(): Returns true if the option with the given name had been set to the value 0. This is mostly useful for boolean options for which you can't use GetOptionInt(name) == 0, as this would also be true if the option hadn't been set at all.

SetOption() [string value]: Sets an option. The function is case-insensitive to name.

SetOption() [integer value]: Sets an option. The function is case-insensitive to name.
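A short usage sketch (mine, not from the documentation; "msw.remap" is a standard wxWidgets option name, used here purely for illustration):

```cpp
#include <wx/sysopt.h>

void ConfigureOptions()
{
    // All members are static, so no wxSystemOptions instance is created.
    wxSystemOptions::SetOption("msw.remap", 2);  // integer overload

    if ( wxSystemOptions::HasOption("msw.remap") &&
         !wxSystemOptions::IsFalse("msw.remap") )
    {
        // The option is present and was not set to 0.
        int remap = wxSystemOptions::GetOptionInt("msw.remap");  // 2 here
        wxUnusedVar(remap);
    }
}
```

Note the IsFalse() check: GetOptionInt(name) == 0 alone could not distinguish "set to 0" from "never set", which is exactly the subtlety the reference entry above describes.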
https://docs.wxwidgets.org/3.1.5/classwx_system_options.html
Although JavaScript supports a data type we call an object, it does not have a formal notion of a class. This makes it quite different from classic object-oriented languages such as C++ and Java. The common conception about object-oriented programming languages is that they are strongly typed and support class-based inheritance. By these criteria, it is easy to dismiss JavaScript as not being a true object-oriented language. On the other hand, we've seen that JavaScript makes heavy use of objects and that it has its own type of prototype-based inheritance. JavaScript is a true object-oriented language. It draws inspiration from a number of other (relatively obscure) object-oriented languages that use prototype-based inheritance instead of class-based inheritance. Although JavaScript is not a class-based object-oriented language, it does a good job of simulating the features of class-based languages such as Java and C++. I've been using the term class informally throughout this chapter. This section more formally explores the parallels between JavaScript and true class-based inheritance languages such as Java and C++.[30]

[30] You should read this section even if you are not familiar with those languages and that style of object-oriented programming.

Let's start by defining some basic terminology. An object, as we've already seen, is a data structure that contains various pieces of named data and may also contain various methods to operate on those pieces of data. An object groups related values and methods into a single convenient package, which generally makes programming easier by increasing the modularity and reusability of code. Objects in JavaScript may have any number of properties, and properties may be dynamically added to an object. This is not the case in strictly typed languages such as Java and C++.
In those languages, each object has a predefined set of properties,[31] where each property is of a predefined type. When we are using JavaScript objects to simulate object-oriented programming techniques, we generally define in advance the set of properties for each object and the type of data that each property holds.

[31] They are usually called "fields" in Java and C++, but we'll refer to them as properties here, since that is the JavaScript terminology.

In Java and C++, a class defines the structure of an object. The class specifies exactly what fields an object contains and what types of data each holds. It also defines the methods that operate on an object. JavaScript does not have a formal notion of a class, but, as we've seen, it approximates classes with its constructors and their prototype objects. In both JavaScript and class-based object-oriented languages, there may be multiple objects of the same class. We often say that an object is an instance of its class. Thus, there may be many instances of any class. Sometimes we use the term instantiate to describe the process of creating an object (i.e., an instance of a class). In Java, it is a common programming convention to name classes with an initial capital letter and to name objects with lowercase letters. This convention helps keep classes and objects distinct from each other in code; it is a useful convention to follow in JavaScript programming as well. In previous sections, for example, we've defined the Circle and Rectangle classes and have created instances of those classes named c and rect. The members of a Java class may be of four basic types: instance properties, instance methods, class properties, and class methods. In the following sections, we'll explore the differences between these types and show how they are simulated in JavaScript.
Every object has its own separate copies of its instance properties. In other words, if there are 10 objects of a given class, there are 10 copies of each instance property. In our Circle class, for example, every Circle object has a property r that specifies the radius of the circle. In this case, r is an instance property. Since each object has its own copy of the instance properties, these properties are accessed through individual objects. If c is an object that is an instance of the Circle class, for example, we refer to its radius as: c.r By default, any object property in JavaScript is an instance property. To truly simulate object-oriented programming, however, we will say that instance properties in JavaScript are those properties that are created and/or initialized in an object by the constructor function. An instance method is much like an instance property, except that it is a method rather than a data value. (In Java, functions and methods are not data, as they are in JavaScript, so this distinction is more clear.) Instance methods are invoked on a particular object, or instance. The area( ) method of our Circle class is an instance method. It is invoked on a Circle object c like this: a = c.area( ); Instance methods use the this keyword to refer to the object or instance on which they are operating. An instance method can be invoked for any instance of a class, but this does not mean that each object contains its own private copy of the method, as it does with instance properties. Instead, each instance method is shared by all instances of a class. In JavaScript, we define an instance method for a class by setting a property in the constructor's prototype object to a function value. This way, all objects created by that constructor share an inherited reference to the function and can invoke it using the method invocation syntax shown earlier. 
A class property in Java is a property that is associated with a class itself, rather than with each instance of a class. No matter how many instances of the class are created, there is only one copy of each class property. Just as instance properties are accessed through an instance of a class, class properties are accessed through the class itself. Number.MAX_VALUE is an example of a class property in JavaScript: the MAX_VALUE property is accessed through the Number class. Because there is only one copy of each class property, class properties are essentially global. What is nice about them, however, is that they are associated with a class and they have a logical niche, a position in the JavaScript namespace, where they are not likely to be overwritten by other properties with the same name. As is probably clear, we simulate a class property in JavaScript simply by defining a property of the constructor function itself. For example, to create a class property Circle.PI to store the mathematical constant pi, we can do the following: Circle.PI = 3.14; Circle is a constructor function, but because JavaScript functions are objects, we can create properties of a function just as we can create properties of any other object. Finally, we come to class methods. A class method is a method associated with a class rather than with an instance of a class; they are invoked through the class itself, not through a particular instance of the class. The Date.parse( ) method (which you can look up in the core reference section of this book) is a class method. You always invoke it through the Date constructor object, rather than through a particular instance of the Date class. Because class methods are not invoked through a particular object, they cannot meaningfully use the this keyword -- this refers to the object for which an instance method is invoked. Like class properties, class methods are global. 
Because they do not operate on a particular object, class methods are generally more easily thought of as functions that happen to be invoked through a class. Again, associating these functions with a class gives them a convenient niche in the JavaScript namespace and prevents namespace collisions. To define a class method in JavaScript, we simply make the appropriate function a property of the constructor. Example 8-5 is a reimplementation of our Circle class that contains examples of each of these four basic types of members.

function Circle(radius) {      // The constructor defines the class itself.
    // r is an instance property, defined and initialized in the constructor.
    this.r = radius;
}

// Circle.PI is a class property--it is a property of the constructor function.
Circle.PI = 3.14159;

// Here is a function that computes a circle's area.
function Circle_area( ) { return Circle.PI * this.r * this.r; }

// Here we make the function into an instance method by assigning it
// to the prototype object of the constructor.
// Note: with JavaScript 1.2, we can use a function literal to
// define the function without naming it Circle_area.
Circle.prototype.area = Circle_area;

// Here's another function. It takes two Circle objects as arguments and
// returns the one that is larger (i.e., has the larger radius).
function Circle_max(a, b) {
    if (a.r > b.r) return a;
    else return b;
}

// We make this function into a class method by assigning it to a
// property of the constructor function.
Circle.max = Circle_max;

// Here is some code that uses each of these fields:
var c = new Circle(1.0);       // Create an instance of the Circle class
c.r = 2.2;                     // Set the r instance property
var a = c.area( );             // Invoke the area( ) instance method
var x = Math.exp(Circle.PI);   // Use the PI class property in our own computation
var d = new Circle(1.2);       // Create another Circle instance
var bigger = Circle.max(c, d); // Use the max( ) class method

Example 8-6 is another example, somewhat more formal than the last, of defining a class of objects in JavaScript. The code and the comments are worth careful study. Note that this example uses the function literal syntax of JavaScript 1.2.
Because it requires this version of the language (or later), it does not bother with the JavaScript 1.1 compatibility technique of invoking the constructor once before assigning to its prototype object.

In Java, C++, and other class-based object-oriented languages, there is an explicit concept of the class hierarchy. Every class can have a superclass from which it inherits properties and methods. Any class can be extended, or subclassed, so that the resulting subclass inherits its behavior. As we've seen, JavaScript supports prototype inheritance instead of class-based inheritance. Still, JavaScript analogies to the class hierarchy can be drawn. In JavaScript, the Object class is the most generic, and all other classes are specialized versions, or subclasses, of it. Another way to say this is that Object is the superclass of all the built-in classes. All classes inherit a few basic methods (described later in this chapter) from Object.

We've learned that objects inherit properties from the prototype object of their constructor. How do they also inherit properties from the Object class? Remember that the prototype object is itself an object; it is created with the Object( ) constructor. This means the prototype object itself inherits properties from Object.prototype! So, an object of class Complex inherits properties from the Complex.prototype object, which itself inherits properties from Object.prototype. Thus, the Complex object inherits properties of both objects. When you look up a property in a Complex object, the object itself is searched first. If the property is not found, the Complex.prototype object is searched next. Finally, if the property is not found in that object, the Object.prototype object is searched. Note that because the Complex prototype object is searched before the Object prototype object, properties of Complex.prototype hide any properties with the same name in Object.prototype.
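The code listing for Example 8-6 did not survive in this copy. The following is a reconstructed sketch of a Complex class consistent with the surrounding discussion: a constructor storing x and y, and function-literal instance methods on Complex.prototype, including a toString( ) that hides the one inherited from Object.prototype. Method names other than toString( ) are illustrative assumptions, not the book's original listing:

```javascript
// Reconstructed sketch of the Complex class (the original Example 8-6
// listing is missing here; add() and magnitude() are illustrative).
function Complex(real, imaginary) {
  this.x = real;        // instance property: the real part
  this.y = imaginary;   // instance property: the imaginary part
}

// Instance methods, defined with JavaScript 1.2 function literals.
Complex.prototype.add = function (that) {
  return new Complex(this.x + that.x, this.y + that.y);
};
Complex.prototype.magnitude = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};
// This definition hides the toString() inherited from Object.prototype,
// because Complex.prototype is searched before Object.prototype.
Complex.prototype.toString = function () {
  return '{' + this.x + ',' + this.y + '}';
};

var z = new Complex(3, 4).add(new Complex(1, 1));
console.log(z.toString());    // {4,5}
console.log(z.magnitude());   // sqrt(41)
```

The toString( ) definition is the part the following paragraph relies on: it demonstrates a prototype property shadowing its Object.prototype namesake.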
For example, in the class definition shown in Example 8-6, we defined a toString( ) method in the Complex.prototype object. Object.prototype also defines a method with this name, but Complex objects never see it because the definition of toString( ) in Complex.prototype is found first.

The classes we've shown in this chapter are all direct subclasses of Object. This is typical of JavaScript programming; there is not usually any need to produce a more complex class hierarchy. When necessary, however, it is possible to subclass any other class. For example, suppose we want to produce a subclass of Complex in order to add some more methods. To do this, we simply have to make sure that the prototype object of the new class is itself an instance of Complex, so that it inherits all the properties of Complex.prototype:

// This is the constructor for the subclass.
function MoreComplex(real, imaginary) {
    this.x = real;
    this.y = imaginary;
}

// We force its prototype to be a Complex object. This means that
// instances of our new class inherit from MoreComplex.prototype,
// which inherits from Complex.prototype, which inherits from
// Object.prototype.
MoreComplex.prototype = new Complex(0,0);

// Now add a new method or other new features to this subclass.
MoreComplex.prototype.swap = function( ) {
    var tmp = this.x;
    this.x = this.y;
    this.y = tmp;
}

There is one subtle shortcoming to the subclassing technique shown here. Since we explicitly set MoreComplex.prototype to an object of our own creation, we overwrite the prototype object provided by JavaScript and discard the constructor property we are given. This constructor property, described later in this chapter, is supposed to refer to the constructor function that created the object. A MoreComplex object, however, inherits the constructor property of its superclass, rather than having one of its own.
One solution is to set this property explicitly:

MoreComplex.prototype.constructor = MoreComplex;

Note, however, that in JavaScript 1.1, the constructor property is read-only and cannot be set in this way.
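Putting the pieces together, a short runnable sketch (the constructor bodies follow the shape discussed above; the Complex definition is reduced to the minimum needed here) shows the inherited constructor property before the fix and the corrected value after it:

```javascript
// Demonstrates the constructor-property shortcoming and its explicit fix.
function Complex(real, imaginary) { this.x = real; this.y = imaginary; }
function MoreComplex(real, imaginary) { this.x = real; this.y = imaginary; }

MoreComplex.prototype = new Complex(0, 0);  // subclass via a prototype instance

// Before the fix: constructor is inherited from the superclass's prototype.
console.log(new MoreComplex(1, 2).constructor === Complex);      // true

MoreComplex.prototype.constructor = MoreComplex;                 // the explicit fix

// After the fix: instances report their own constructor again.
console.log(new MoreComplex(1, 2).constructor === MoreComplex);  // true
```

The first lookup walks the chain instance, MoreComplex.prototype, Complex.prototype, and finds Complex there; setting an own constructor property on MoreComplex.prototype shadows that inherited value for all instances.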
https://docstore.mik.ua/orelly/webprog/jscript/ch08_05.htm
This is the homepage for EARL, the Evaluation And Report Language being developed by the W3C WAI ER Group. We expect that EARL will be used more widely as part of the new QA effort at W3C, as the base format for storing results of test runs. EARL is also related to the W3C's Semantic Web Activity.

This document is circulated for discussion about EARL only. Version 0.95 is considered to be a stable version of the language, but note that it is only a beta version for people to start implementing. No guarantee of the efficacy of the language is made at this time, or of relationships to future versions. We hope that people will start integrating EARL concepts and syntax in their tools, and give us feedback on the ERT mailing list (archives) about functionalities they would like to see added or removed from this language.

The Evaluation And Report Language is an RDF-based framework for recording, transferring and processing data about automatic and manual evaluations of resources. The purpose of this is to provide a framework for generic evaluation description formats that can be used in generic evaluation and report tools. The basic statement we want to express is that:

- In some context
- Some resource (e.g. a Web page, a browser)
- Evaluates-to
- Some result (e.g. fails to meet a checkpoint, correctly implements CSS 2 display property)

EARL itself comprises a core model, and an extensible vocabulary, of RDF terms. It is our hope that, in the future, other EARL-based vocabularies could be made. This modularization means that people aren't constrained to using the properties in the main vocabulary, and that the EARL vocabulary can consist of core terms only. It should be noted, though, that at the present time the EARL vocabulary should be adequate for expressing 95% of evaluations. EARL would initially be a means for expressing, in a machine-readable form:
- Results of evaluating web pages and web sites against the Web Content Accessibility Guidelines
- Storing results of running an online test-suite (e.g. SVG)
- Results of syntax validation of Web resources (e.g. HTML validator)

The language would be extensible to:

- Evaluations of languages, e.g. as defined by their schemas
- Evaluations of authoring tools
- Repair of above

EARL expresses evaluations, and assertions, about all sorts of languages and tools. We make the distinction that we have a "context" for something - so for example "John says that...", and an assertion is the object of this, so for example "MyPage passes X". The evaluation is simply the context and the object(s) put together.

EARL, at its base, is a really simple language. For example, the following says that "Bob" asserts that his page passes all the priority 1 checkpoints in WCAG 1.0 (and some other information - all in stripped-down Notation3 syntax):

@prefix : <> .
@prefix earl: <> .
@prefix rdf: <> .

:Bob earl:asserts [
      rdf:subject :MyPage;
      rdf:predicate earl:passes;
      rdf:object :WCAG10P1
   ];
   earl:email <mailto:bob@example.org>;
   earl:name "Bob B. Bobbington" .

:MyPage a earl:WebContent;
   earl:testSubject <>;
   earl:lastModified "2001-05-07" .

:WCAG10P1 a earl:TestCase;
   earl:testMode earl:Manual;
   earl:testCriteria [
      earl:suite <>;
      earl:level <>;
      earl:note """All Priority 1 checkpoints conformed to means that
a document is WCAG Level Single-A compliant"""
   ] .

Note that because EARL is RDF-based, it can only make assertions about things that have URIs. There has been some work on using the BNF of languages to transform them into XML syntax so that they may be processed as XML. In this way, it will be possible for EARL to describe and repair all sorts of languages, from XHTML, through CSS, to ECMAScript, and beyond, due to XPath/XPointer.

EARL is an application of RDF (Resource Description Framework). Part of the reason behind choosing RDF is set out in "Why RDF is not the XML Model".
Hence, EARL follows the RDF model, but we make no recommendation as to the syntax of EARL - as long as the (EARL) model is followed. [N.B. As of Dec. 2001, the WG is leaning towards defining XML RDF as the canonical serialization for EARL 1.0.] For example, RDF syntax is not limited to XML RDF, for it can take on certain (non-normative, as far as the W3C is currently concerned) plain-text forms as well. It may also be that an EARL evaluation processor (EP) can only support a subset of EARL designed for its own particular task. This is fine as long as the EP does not introduce any features outside of RDF Model and Syntax, or the associated Schemas, ontologies and logic.

Notation3 (N3 for short) was chosen as the primary syntax to be used in discussions about EARL on the ERT list, which sped up the developmental process somewhat, due to its readability. See the Notation3 specification and N3 Primer for more information.

EARL 0.95 is the latest version of EARL, incorporating revisions from EARL 0.9 (see list of changes). EARL 0.9 is now deprecated - please use 0.95 instead (we've made available a guide with rules files to help you make the conversion). Also available are some EARL 0.95 implementation notes. An RDF schema for EARL 0.95 is available (also in Notation3). The persistent namespace for this version of the language is:- Please note the hash (#) on the end (see also: Cool URIs Don't Change). The EARL datatypes (of which there is only one - "Date") are declared in a separate file.

We have produced three sets of three examples of EARL 0.95 (in Notation3): one set using contexts, one using reification, and one converted from the old 0.9 examples. The ER Working Group is currently working on version 1.0 of the language. A non-WG-endorsed work-in-progress EARL 1.0 specification is available. See EARL Background. Comments are most welcome; please send any questions and comments about the content of this document to the ERT mailing list at w3c-wai-er-ig@w3.org.
$Id: Overview.html,v 1.54 2001/12/09 23:04:36 spalmer Exp $
http://www.w3.org/2001/03/earl/
Festival and special event management

This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.

IMPACT OF DLF IPL ON THE SOUTH AFRICAN COMMUNITY AND INDIAN ECONOMY

INTRODUCTION

No event takes place in isolation, and there exists no event whose effects can be annulled entirely. An event has a direct or indirect influence on every aspect of our lives, including social, cultural, economic, environmental and political aspects (Allen et al., 2002). The payback from an event is enormous: a lot of constructive and encouraging associations are formed during the event, and this is one of the most important reasons for the attractiveness and fame of an event (Bowdin et al., 2006). It is essential to measure the various impacts of an event, thus ensuring proper monitoring, control and evaluation. Recent literature has revealed an interesting fact: the methods used to measure an event, and the aspects measured, differ significantly (Wood, E.H., 2005). Primarily, constructive social, cultural and economic impacts are normally recognized to be the probable advantage to event hosts (Veres et al., 2008). It is natural for hosts to give more importance to the economic impact, an emphasis highly influenced by tourism research. Economic advantages of an event are very vital to the host, hence it is imperative to have good frameworks for the measurement of this aspect. However, an accepted fact is that economic benefits are not the only advantage that comes with an event. Various elusive benefits also have to be measured to know how successful an event has been (Bowdin et al., 2006); Jones (2001) suggests that even if negative effects are included, focusing merely on direct expenditure benefits will still give an incomplete picture.
However, it should also be kept in mind that events can sometimes have negative and unplanned consequences, and these consequences can lead to the event receiving both media and public attention for the wrong reasons (Allen et al., 2002). This has to be considered during the planning and execution of the event. The power of the media in deciding how an event is shown is formidable. The media can have a strong social and cultural impact upon society; thus the media can influence how the event is perceived, and also how it is shown to remote audiences (Getz, D., 2007). Events can basically have two kinds of outcomes, i.e., positive and negative impacts on the host communities and stakeholders (Allen et al., 2002). Event failures can be very devastating, bringing in negative publicity, humiliation and expensive lawsuits (Bowdin et al., 2006). Hence a lot of importance is placed on the financial impacts of an event. Factors leading to this are that both employers and government need to meet budget goals and deadlines, and be ready with explanations for expenditures; an important factor is also that financial impacts can be easily measured (Allen et al., 2002). Getz, D. (2002) suggests four main costs and benefits that have to be evaluated: tangible benefits, tangible costs, intangible benefits and intangible costs. The methods of measurement or assessment used vary with the impacts to be measured or assessed. To calculate the overall impact of the event, social and cultural benefits cannot be left out; however, rather than following a statistical approach, calculating them may require a narrative approach (Bowdin et al., 2006). The impact of an event is sometimes calculated well before the event actually takes place, because in many scenarios policy focus shifts elsewhere after the event (Jones, 2001). Long-term effects of an event are very crucial.
Whether or not the local community attends the event, its effects will be felt by them (Ritchie and Smith, 1991). The event can provide the host society with a platform for putting forward its knowledge, hosting probable shareholders and promoting new business openings (Bowdin et al., 2006). These events can create possible employment opportunities during the construction phase (Allen et al., 2002). One of the most important impacts of a mega-event is on the tourism industry, which can bring a lot of visitors to a place that has never been a tourist destination before (Getz, D., 2006). This paper focuses on the various impacts of the DLF IPL on the South African community and how it has contributed towards the Indian economy. It also discusses the various advantages and disadvantages that are involved with the DLF IPL.

OVERVIEW OF DLF IPL

The DLF IPL is organised by the well-established event management organisation IMG WORLD, LONDON. The Indian Premier League (IPL) has been produced by a joint venture between IMG and the Board of Control for Cricket in India (BCCI). For the IPL, IMG explored the most favourable fair and mercantile model, and a huge amount of $724m was raised by carrying out the notable franchise sale procedure (IMG World, 2009). In India, the IPL is one of the most economically successful sports concepts ever initiated. The IPL is played according to the most up-to-date cricket format, which is Twenty20; this decreases match playing time to three hours, and thus makes it ideal for prime-time television as well as live in-stadia spectators. The television production and distribution rights, franchise rights, event and venue management and sponsorship sales for the IPL are handled by IMG. The shifting of the venue to South Africa in the year 2009 was taken care of by IMG.
IMPACT ON INDIAN ECONOMY DUE TO IPL SWITCH TO SOUTH AFRICA

Businesses in India, right from the roadside trader to the publicity organizations holding millions of dollars of shares, faced a financial crisis because of the shift of the IPL from India to South Africa for security reasons, as the IPL dates conflicted with the general election dates in India. Market analysts sensed that this sudden move from India to South Africa drained an ample amount from the Indian market, adding to the already existing despair of the global financial slump. The previous year the IPL had contributed up to 1 billion rupees to the Indian economy, but due to the shift to South Africa, which involved a lot of additional expenditure, the BCCI did not earn a lot of income. The media houses in India underwent a major income loss, estimated to be from 500 million to 700 million rupees. The estimated loss in gate receipts was 500 million rupees. The hospitality industry as well as the tourism industry took a major hit. But there was an advantage tagged to this shift as well, which was that the IPL is now seen by people as a tournament with international value.

OVERALL IMPACT OF DLF IPL ON THE SOUTH AFRICAN COMMUNITY

According to Getz (2007) all events have a direct social and cultural impact on their participants, and sometimes on their wider host communities. But some events leave a legacy of greater awareness and participation in sporting and cultural activities (Bowdin et al., 2006). The Indian Premier League (IPL) has contributed a lot towards education in South Africa (The Hindustan Times, 2009). According to Getz (2007) financial profits are gained when the particular event can pull in extra income for community benefits, either in the form of endowments or funding.
As stated by Fakir Hassen (2009), Lalit Modi, the man behind the IPL, proclaimed a scholarship of over eight million, and this was one of the best programmes towards community development in South Africa by a sports-oriented organisation. This money given towards education benefits (Torkildsen, G., 2005) has also helped in the initiation of the Help Educate and Teach (HEAT) programme that was commenced at the Alexander Sinton High School in the suburb of Athlone. Both schools and individual learners will be benefited by this programme. Lalit Modi stated that any attempt towards the development and strengthening of individuals as well as a nation always rests on a foundation of superior education. He also stated that India has emerged as a successful nation because of its strong education base: "This emphasis on education is now paying off many times over as India has grown into an economic powerhouse far better equipped to lift people out of poverty" (The Hindustan Times, 2009). The investment of the DLF IPL in the education of the South African community targets a prospective return (Getz, D., 2007), and careful analysis of this is vital. According to one of the strategies set out by Bowdin et al. (2006), i.e., the local area strategy, the DLF IPL created a carnival atmosphere by celebrating cultures of the South African community, which in turn led to the enhancement of community unity. According to the report by the Hindustan Times, 32 schools have benefited from the HEAT programme. With the cooperation of the television producers, five learners who attended the DLF IPL matches were recognized at individual matches and their faces were displayed on the monitors in the stadium. Each one of these received 15,000 rands towards their school fees. Cricket South Africa (CSA) chief Gerald Majola stated that the IPL model could be used to make this game a global sport event, and this in turn would help other set-ups take a great leap as well.
The benefits gained economically were considerable. During this period of economic crisis the IPL built a strong base for South Africa's tourist industry (Bowdin et al., 2006) and also proved to the world that the country is capable of hosting the 2010 FIFA World Cup. According to Allen et al. (2002), beyond the expenses of the event itself, the people who came for the event spent their money on tours, lodging and other services in South Africa, and there was an increase in hotel room bookings by 40,000, which otherwise are normally very low during the winter season in South Africa. The South African government is majorly focusing on the tourism sector as an upcoming industry that is capable of increasing economic benefits and employment opportunities (Bowdin et al., 2006). In addition to the tourism produced throughout this event, the IPL also attracted a lot of media reporting (Allen et al., 2002), and due to this the South African community's profile has gained importance (Getz, D., 2006). The IPL has not only boosted the confidence of youthful South African cricket players but has also provided them with an opportunity to take part in a sporting event that is recognized worldwide (The Business Standard, 2009). "It is still sometimes argued by event 'boosters' that mega events generate benefit from the legacy of infrastructure and venues, but this assertion can easily be wrong" (Getz, D., 2007); however, the basic purpose of the IPL is very fruitful considering the fact that it brings together cricket stars from around the world, who otherwise play against each other for their national sides, into single squads (The Business Standard, 2009). This event has created a long-lasting bond between the two countries (India and South Africa). Hosting the IPL in South Africa has not only made the IPL a global brand, but has also brought billions in income to the South African economy.
SWOT ANALYSIS OF DLF IPL

Based on the details in Indian Premier League (2009), the following have been identified.

STRENGTHS OF IPL

The Indian Premier League (IPL) follows the Twenty20 format of cricket. This is the shortest version of the game, finishing within two and a half hours of game play. Unlike the One-day format, which takes a full day to complete, or the Test format spanning five days of play, Twenty20 is fast-paced and electrifying, thus pulling in a large crowd to watch the game even on weekdays. The IPL has also employed people who can really market goods well; these highly trained economists maximize revenue with very clean and methodical approaches. This makes the IPL an integrated sport. Further, each team has players from different countries. This draws a wide range of support from different communities to a single team, thus making cricket globally accepted. The supremacy of the BCCI in the control of the ICC has a lot of benefits for the DLF IPL: the financial backing from the BCCI, and its power to influence the dates of international cricket matches, favour the IPL.

WEAKNESSES OF IPL

At the pace at which people lead their lives now, they hardly have time to spend lavishly on watching a sport. Since the IPL has satisfied this need, people are happier to watch the Twenty20 format. A lot of talk has been going on about the status of the other formats of the game and how to revive them, but the truth is that the IPL has damaged the image of One-day cricket and Test cricket. Further, a lot of money is involved in the IPL. Failure of a team can hurt the management's financial position badly. Teams also spend a lot on advertisement, the cost of players and brand promotion, hence sponsorship is hard to find at their overpriced rates. A team doing well will fare well; if not, tough times lie ahead.

OPPORTUNITIES OF IPL

The IPL has a budding fan following.
Since it is striking and very attractive, a lot of potential sponsors and advertisers are willing to invest heavily in this event. The IPL has eight franchises, each being responsible for itself in every sense. Every franchise has to market its team well and build a large fan following behind the team; in the long term this will generate a lot of revenue for them. There is a nice opening for teams to sell their brand name in the form of shirts, accessories and other memorabilia. Another important and vital opportunity for the IPL is to target teenagers. Older people will naturally have a stronger inclination towards the traditional form of cricket, but the youth of today will like this thrilling and breathtaking format. Each franchise will continue to pay the same fees until 2017-2018, hence the teams need not worry about inflation, which has been a drawback in India for the past few years.

THREATS OF IPL

If the top players in world cricket cannot be brought into IPL teams, the league will lose its popularity. Further, the domestic season in Australia runs concurrently with the IPL. If the Australian players are not allowed to choose the IPL instead of their local teams, a lot of the fan following will be lost.

REFERENCES

- Allen, J., O'Toole, W., McDonnell, I., Harris, R. (2002) Festival and special event management. 2nd ed. Milton, John Wiley & Sons Australia Ltd.
- Author (2009) DLF wins title sponsorship rights for IPL. [Internet] Mumbai, Business-Standard. Available from: [Accessed on 24th December 2009].
- Author (2009) Experience the international sport of cricket. [Internet] IMG, London. Available from: [Accessed on 24th December 2009].
- Author (2009) Indian Premier League. [Internet] India, Global Cricket Venture Limited. Available from: [Accessed on 23rd December 2009].
- Author (2009) IPL 2009 is on, Venues may be shifted. [Internet] New Delhi, 24/7 News Network. Available from: [Accessed on 22nd December 2009].
- Author (2009) IPL cricket T20 Competition to Boost SA Economy. 1404.
[Internet] Available from: [Accessed on 23rd December 2009].
- Bowdin, G., Allen, J., O'Toole, W., Harris, R., McDonnell, I. (2006) Events Management. 2nd ed. Burlington, Elsevier Butterworth-Heinemann.
- Veres, D. & Clark, H. (2008) Increasing the contribution of special events to Niagara's tourism industry. International Journal of Contemporary Hospitality Management. Vol 20(3), pp. 313-319.
- Getz, D. (2007) Management of events. In: Event studies: Theory, research and policy for planned events. 1st ed. Burlington, Elsevier Butterworth-Heinemann.
- Hassen, F. (2009) IPL boosts South African education too. [Internet] Cape Town, Indo-Asian News Service. Available from: [Accessed on 22nd December 2009].
- Jones, C. (2001) Mega-events and host-region impacts: determining the true worth of the 1999 World Cup. International Journal of Tourism Research. Vol. 3, pp. 241-51.
- Rajan, S. (2009) Indian businesses brace for impact of IPL switch. [Internet] UK, Thomson Reuters. Available from: [Accessed on 24th December 2009].
- Ritchie, J., Smith, B. (1991) The impact of a mega-event on host region awareness: a longitudinal study. Journal of Travel Research. Vol 30(1), pp. 3-10.
- Sapa (2009) IPL boost for SA economy. 576308. [Internet] Available from: [Accessed on 23rd December 2009].
- Torkildsen, G. (2005) Leisure and recreation management. 5th ed. Oxon, Routledge.
- Tum, J., Norton, P., Wright, J.N. (2006) Management of event operations. 2nd ed. Burlington, Elsevier Butterworth-Heinemann.
- Wood, E. H. (2005) Measuring the economic and social impacts of local authority events. International Journal of Public Sector Management. Vol 18(1), pp. 37-53.
https://www.ukessays.com/essays/business/festival-and-special-event-management.php
This package adds "support" for the lacking alpha channel in HTML5 <video> elements. Formerly known as "jquery-seeThru".

Your HTML5 video source is re-rendered into a canvas element, adding the possibility to use transparencies in your video. Alpha information is either included in the video's source file (moving) or in a separate <img> element (static). The package also ships with a simple CLI tool for automatically converting your RGBA video sources into the correct format.

Native support for VP8/WebM video with alpha transparencies has landed in Chrome quite a while ago, so ideally other browser vendors will catch up soon and this script becomes obsolete at some point. You can see the article at HTML5 Rocks and read the discussion about how to use seeThru as a "polyfill" for more information.

npm:

$ npm install seethru

bower:

$ bower install seethru

CDN:

<script src=""></script>
<!-- or -->
<script src=""></script>

Alternatively, use the version(s) in /dist.

This approach is a cheap hack! Due to the lack of alpha support in HTML5 video it is one of the few ways to use video with alpha, so it might be the only viable option in some cases, but please don't expect it to work like a charm when processing 30fps 1080p video (or multiple videos) on an old machine with 39 browser tabs opened. Test your usage thoroughly on old machines as well, and if you're not satisfied with the speed, maybe think about using a purely native solution. Also: the mobile support of this approach is very limited.
seethru-convert

In default configuration the script assumes that the alpha information is added underneath the original video track (in the exact same dimensions: a video of 400x300 target dimensions will have a 400x600 source file). The alpha information should be a black and white image, with white being interpreted as fully opaque and black as fully transparent (colored input will be averaged). For optimal results the color channel should be un-premultiplied (see the Wikipedia article on Alpha Compositing for more info on what that is all about). If you need a tool to un-premultiply your imagery you can use Knoll Unmult, which is available for quite a lot of packages. If there is no way for you to supply unmultiplied sources, you can use the unmult option, which comes with a severe performance penalty due to un-premultiplying at runtime. For a basic introduction of how to encode and embed video for HTML5 pages see the great Dive into HTML5.

Note the jagged edges in the color channel(s) due to un-premultiplying:

put over a greenish/blueish background results in

Live Demo

It is also possible to source the alpha information from an <img> element. The script lets you use either the luminance information of the RGB channels (i.e. the image) or the image's alpha channel (see options for how to choose). In case the image does not fit your video's dimensions it will be stretched to fit.

To use the script include the source:

<script type="text/javascript" src="seeThru.min.js"></script>

and then pass your video element (either a selector or an actual DOMElement) and your options to seeThru.create(el[, options]):

var transparentVideo = seeThru.create('#my-video');

If you specify dimension attributes for your video, they will be considered; in case they are left unspecified, the dimensions of the source file will be used (video with alpha included will of course turn out to be halved in height).
For testing you can download and use the example videos in the repo's media folder.

There are a few options you can pass when building an instance:

- start defines the video's behavior on load. It defaults to external, which will just display the first frame of the video and wait for the caller to initiate video playback. Other options are clicktoplay, which will display the first frame of the video until it is clicked.
- end defines the video's behavior when it has reached its end. It defaults to stop. Other possibilities are rewind, which will jump back to the first frame and stop. If you use start: 'clicktoplay' along with rewind or end, the video will be clickable again when finished.
- staticMask lets you use the content of an <img> node as alpha information (also black and white). The parameter expects a CSS selector (preferably an ID) that refers directly to an image tag, like #fancy-mask. In case the selector matches nothing or a non-image node, the option is ignored.
- alphaMask specifies whether the script uses the black and white information (i.e. false) or the alpha information (i.e. true) of the element specified in the staticMask parameter. Defaults to false.
- height can be used to control the height of the rendered canvas. Overrides the attributes of the <video> element.
- width can be used to control the width of the rendered canvas. Overrides the attributes of the <video> element.
- poster can be set to true if you want the video to be replaced by the image specified in the <video>'s poster attribute when in a paused state.
- unmult can be used if your source material's RGB channels are premultiplied (with black) and you want the script to un-premultiply the imagery. Note that this might have really bad effects on performance, so it is recommended to work with unpremultiplied video sources.
- videoStyles is the CSS (in object notation) that is used to hide the original video - it can be updated in order to work around autoplay restrictions.
  It defaults to { display: 'none' }.
- namespace is the prefix that will be used for the CSS class names applied to the created DOM elements (buffer, display, posterframe). Defaults to seeThru.

This might look like this:

```javascript
seeThru.create('#my-video');
```

or

```javascript
seeThru.create('#my-video', {
  staticMask: '#image-with-alpha',
  alphaMask: true
});
```

On the returned seeThru object these methods are available:

- .ready(fn) lets you safely access the instance's methods, as it will make sure the video's metadata has been fully loaded and the script was able to initialize. The callback is passed the seeThru instance as its 1st argument, the used video as its 2nd argument, and the canvas representation as the 3rd. To ensure consistent behavior this will always be executed asynchronously, even if the video is ready when called.
- .updateMask(selectorOrElement) lets you swap the alpha source for a video that uses static alpha information. The value of the alphaMask option is kept from initialization.
- .revert() will revert the <video> element back to its original state and remove the <canvas> elements, all attached data, and event listeners/handlers.
- .play() and .pause() are convenience methods to control the playback of the video.
- .getCanvas() lets you get the visible canvas element so you can interact with it.

Example:

```javascript
seeThru
  .create('#my-video', { width: 400, height: 300 })
  .ready(function (instance, video, canvas) {
    canvas.addEventListener('click', function () {
      instance.revert();
    });
    video.addEventListener('ended', function () {
      instance.revert();
    });
  });
```

If window.jQuery is present, the script will automatically attach itself to jQuery as a plugin, meaning you can also do something like:

```javascript
$('#my-video').seeThru().seeThru('play');
```

If your jQuery is not global (AMD/browserify) but you still want to attach the script as a plugin, you can use the attach method existing on seeThru:
```javascript
var $ = require('jquery');
seeThru.attach($);
```

seethru-convert

The package ships with a CLI tool (seethru-convert) that can prepare your video sources for you. Pass it a video with alpha information (Animation-compressed .movs should work best here - also make sure the video actually contains information in the alpha channel) and it will automatically separate the alpha and RGB information and render them side by side into the target file.

To install the CLI tool globally:

```shell
$ npm install seethru -g
```

To use the script you need to have ffmpeg and ffprobe installed. The executables need to be in your $PATH (%PATH% on Windows). Alternatively, you can pass overrides for the executables via the --ffmpeg-path and --ffprobe-path options.

Now you are ready to go:

```shell
$ seethru-convert --in myvideo.mov --out myvideo-converted.mov
```

Ideally you should do this conversion on your uncompressed / high-quality video source once, and then convert the output into whatever files you need (mp4, ogg et al.) afterwards.

You can also use standard video editing software to prepare the sources. This walkthrough shows how to do it using Adobe After Effects:

1. Put your animation with alpha in a composition.
2. Double the composition's height.
3. Duplicate your footage layer, align the two, and use the second instance as an Alpha Track Matte for a white solid.
4. Make sure you are using an unmultiplied (straight) version of your color source.

Note that the canvas API is subject to cross-domain security restrictions, so be aware that the video source files have to be served from the same domain (i.e. the video files have to be requested from the same domain as the document that is calling seeThru), otherwise you will see a DOM Security Exception. Also note that this applies to subdomains as well, therefore you cannot mix www and non-www URLs.
This can be worked around by using CORS or by using Blob URLs:

```javascript
function loadAsObjectURL(video, url) {
  var xhr = new XMLHttpRequest();
  xhr.responseType = 'blob';
  xhr.onload = function () {
    video.src = URL.createObjectURL(xhr.response);
  };
  xhr.onerror = function () { /* Houston, we have a problem! */ };
  xhr.open('GET', url, true);
  xhr.send();
  video.onload = function () {
    URL.revokeObjectURL(video.src);
  };
}

var video = document.querySelector('video');
video.addEventListener('loadedmetadata', function () {
  seeThru.create(video);
});
loadAsObjectURL(video, '');
```

To mimic a behavior as if the original video was still visible, the script echoes all mouse events fired by the canvas representation. The echoed events are: mouseenter, mouseleave, click, mousedown, mouseup, mousemove, mouseover, hover, dblclick, contextmenu, focus, and blur.

Support for mobile browsers is patchy, due to some of them forcing any video to open in an external player or requiring user interaction. As of iOS 10, a video will work with seeThru if it has both the playsinline attribute and the muted attribute:

```html
<video id="video" autoplay loop playsinline muted></video>
```

In any case you will need to add the playsinline attribute to the <video> tag. If a video has audio, adding the muted attribute will enable playsinline.

The script is tested in Chrome, Firefox, Safari, Opera and IE 9.0+. See caniuse.com for browsers that support <canvas> and <video>.

All source code is licensed under the MIT License; demo content, video and imagery are available under CC-BY-SA 3.0.

Thanks to Jake Archibald, who had the original idea for this approach, Kathi Käppel, who designed the lovely Mr. Kolor from the demo, and Sebastian Lechenbauer for making fun of my git dyslexia.
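Returning to the mouse-event echoing described above: the idea can be sketched generically as listeners on the canvas that re-dispatch each event type on the hidden video. This is a simplified illustration, not the package's actual code - in a browser you would construct a MouseEvent carrying the original coordinates, while this sketch uses a plain Event so it runs anywhere EventTarget exists.

```javascript
// Sketch of the event-echo idea: for each listed event type, a listener
// on the canvas re-fires an event of the same type on the video element.
// Real code would use `new MouseEvent(e.type, e)` to preserve coordinates.
function echoEvents(canvas, video, types) {
  types.forEach(function (type) {
    canvas.addEventListener(type, function (e) {
      video.dispatchEvent(new Event(e.type));
    });
  });
}
```

Wired up with the event list from the README (click, mousemove, and so on), code listening on the original video keeps working even though the user only ever interacts with the canvas.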
I have the following code snippet in my Declarative Jenkins pipeline, wherein I am trying to assign a Groovy array variable 'test_id' from a shell array variable 't_id':

```groovy
script {
    def test_id
    sh """
    t_id=($(bash -c \" source ./get_test_ids.sh ${BITBUCKET_COMMON_CREDS_USR} ${BITBUCKET_COMMON_CREDS_PSW} ${env.StoryId} \"))
    test_id=${("${t_id[@]}")}
    """
    echo "${test_id[0]}"
    echo "${test_id[1]}"
}
```

The shell commands work in the bash shell, but when I add the multi-line sh step I get the syntax error below when I run it in Jenkins. Can you please help?

```
WorkflowScript: 46: illegal string body character after dollar sign;
   solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" @ line 46, column 20.
       sh """
       ^
```

Thanks @Sirajul! The escape sequence works. But currently I am unable to get the value of the shell array variable t_id assigned to the Groovy array test_id. How can I access it?

```groovy
script {
    def test_id
    sh """
    t_id=(\$(bash -c \" source ./get_test_ids.sh ${BITBUCKET_COMMON_CREDS_USR} ${BITBUCKET_COMMON_CREDS_PSW} ${env.StoryId} \"))
    echo "\${t_id[*]}" // Returns the array's elements
    test_id=\${("\${t_id[@]}")} // throws a "Bad substitution" error
    """
    echo "${test_id[0]}" // Does not return any value
    echo "${test_id[1]}" // Does not return any value
}
```

@Abhilash, try adding this array declaration in your script, something like below - you might have missed the Groovy array syntax. Let me know if this works; if not, I will try replicating the issue on my side:

```groovy
script {
    def test_id = []
    sh """
    t_id=(\$(bash -c \" source ./get_test_ids.sh ${BITBUCKET_COMMON_CREDS_USR} ${BITBUCKET_COMMON_CREDS_PSW} ${env.StoryId} \"))
    echo "\${t_id[*]}" // Returns the array's elements
    test_id=\${("\${t_id[@]}")}
    """
    echo "${test_id[0]}"
    echo "${test_id[1]}"
}
```

See if this could help you, @Abhilash. This might have something to do with string interpolation, I believe.
Try adding the shebang line as well, just in case.

I have tried the following in my script and it worked. Use the statement below to get a shell command's output into a Groovy variable:

```groovy
<groovy variable> = sh(script: '<shell command>', returnStdout: true).trim()
```

The output is a string, and you can pass it back into a shell script via ${<variable name>}.

Regards,
RRR

PS: If this post has helped you, please upvote it.

Hi @Raveendiran! I understand your answer but I have a small doubt. If the docker push command fails (I always end up with errors while pushing an image to the Docker registry), what will be the value of "test_result"?
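To close the loop on the original question: a bash array cannot cross the sh-step boundary directly, but it can be printed one element per line and the step's stdout (captured with returnStdout: true) split on newlines in Groovy. The shell half of that round trip can be sketched as follows - get_test_ids here is a hypothetical stand-in for the poster's ./get_test_ids.sh:

```shell
#!/bin/bash
# Hypothetical stand-in for ./get_test_ids.sh: emits test ids on stdout.
get_test_ids() { echo "101 102 103"; }

# Capture the ids into a bash array, then print one element per line so
# a caller (e.g. Jenkins' sh(returnStdout: true)) can split on '\n'.
t_id=( $(get_test_ids) )
printf '%s\n' "${t_id[@]}"
```

On the Groovy side this would pair with something like `def test_id = sh(script: '...', returnStdout: true).trim().split('\n')`, after which test_id[0] holds the first id.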
JQuery - Multiple versions
Chetanya Jain, Apr 25, 2012 10:21 PM

Hi,

CQ5.4 uses jQuery version 1.4.4, and the latest version available is 1.7.2. For some reasons we would like to use the 1.7.2 version of jQuery, but it is clashing with the OOB jQuery version. I would like to know:
- How do we use the latest jQuery pack with CQ5 (noConflict)?
- Will there be an impact on the author / publish instance if 1.4.4 is replaced with 1.7.2?
- Some of the CQ5 libraries are dependent on the OOB jQuery (1.4.4), so in such cases how can I work with both versions without any conflict?

Thanks,
Chetanya

1. Re: JQuery - Multiple versions
Tomalec W, Oct 22, 2012 7:49 AM (in response to Chetanya Jain)

Hi,

I have a similar problem, but I'm afraid that we cannot use the latest pack with CQ5. At the bottom of /etc/clientlibs/foundation/jquery.js there is:

```javascript
// Expose jQuery to the global object
window.jQuery = window.$ = jQuery;
```

Even though they store the previous version of jQuery in:

```javascript
// Map over jQuery in case of overwrite
_jQuery = window.jQuery,

// Map over the $ in case of overwrite
_$ = window.$,
```

they do not use it later. Maybe we can try to add jQuery.noConflict(true) just after foundation/jquery.js is loaded, but it's being loaded asynchronously, so the only idea I have is to overwrite the foundation clientlib.

2. Re: JQuery - Multiple versions
Ryan Lunka, Oct 22, 2012 12:28 PM (in response to Tomalec W) - 1 person found this helpful

The version of jQuery that the CQ platform uses (1.4.4) should not conflict with other versions. This version is namespaced to $CQ and they use .noConflict() to release $. This all happens via clientlib JS inclusion. If you are extending the foundation page component, which is generally the case, the clientlib inclusion is in headlibs.jsp. Make sure that any further inclusions of JS libraries (including another version of jQuery) occur after this headlibs.jsp is included.
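The save-and-restore bookkeeping being discussed here can be modeled outside the browser. The following is a simplified sketch of how jQuery's noConflict mechanism works in general - not CQ's or jQuery's actual source - which also illustrates the thread's complaint: saving _jQuery/_$ is useless if noConflict is never called to restore them.

```javascript
// Minimal model of jQuery-style noConflict bookkeeping: each "load"
// remembers the globals it is about to clobber, and noConflict(deep)
// hands them back while returning a private reference to itself.
function loadFakeJQuery(global, version) {
  var _jQuery = global.jQuery; // map over jQuery in case of overwrite
  var _$ = global.$;           // map over $ in case of overwrite
  var lib = {
    version: version,
    noConflict: function (deep) {
      global.$ = _$;
      if (deep) global.jQuery = _jQuery;
      return lib;
    }
  };
  global.jQuery = global.$ = lib; // expose to the global object
  return lib;
}
```

Loading "1.4.4" and then "1.7.2" into the same global object and calling noConflict(true) on the second leaves the first as the global while you keep a private handle on the second - the same pattern Ryan describes with CQ parking its copy under $CQ.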
Some best practices to keep in mind:
- Don't try to include JS libraries by simply requesting them from out in the world. Use Client Library folders.
- Make sure you are allowing headlibs.jsp to be included and to run. If that isn't possible or you aren't extending the foundation page component, make sure that the same clientlib is included.
- Don't try to replace the OOTB version of jQuery. The CQ platform uses it, so trying to swap it out may cause bad things to happen.
- You don't HAVE to, but I highly recommend you namespace any versions of jQuery that YOU include. This will potentially save you some headaches down the road. Again, use clientlibs to include jQuery.
- Make your app-specific components use your included version of jQuery. You can use the OOTB one namespaced to $CQ, but I would just leave it alone. Let it exist to support the platform and nothing more.

Hope this helps.

3. Re: JQuery - Multiple versions
Tomalec W, Oct 23, 2012 5:00 AM (in response to Ryan Lunka)

Thanks for the reply, Ryan.
- I use CQ5.5, but I need jQuery newer than 1.7.
- I use Client Libraries, but I still cannot be sure (AFAIK) whether some particular clientlib was loaded into the HTML before or after the foundation jQuery in the admin view.
- So do you mean I need to namespace ANY external library I include? So if I have a myApp namespace for one app and myOtherApp for another app, and both use e.g. jQuery or Backbone, do I also have to create noCQ.Backbone and noCQ.jQuery? I thought that's what you created the CQ and _g namespaces for?
- Well, that's the problem: I WANT to do so, but I cannot find a way to make sure that window.jQuery is the window.jQuery I have included, not the one CQ includes.

4.
Re: JQuery - Multiple versions
Tomalec W, Oct 23, 2012 5:23 AM (in response to Ryan Lunka)

I think that issue could be fixed by changing the current /etc/foundation/jquery.js, l. 9319:

```javascript
// Expose jQuery to the global object
window.jQuery = window.$ = jQuery;
```

to

```javascript
// Expose jQuery to the global object
window.jQuery = window._$ = _jQuery || jQuery;
```

5. Re: JQuery - Multiple versions
justin_at_adobe, Oct 23, 2012 5:57 AM (in response to Tomalec W) - 1 person found this helpful

Tomalec - I would not recommend changing /etc/foundation/jquery.js, as that change will likely be overwritten in an upgrade. It sounds like what you want is for:

- window.$ = jQuery 1.8.1
- window.jQuery = jQuery 1.8.1
- window.$CQ = jQuery 1.7 (for CQ's purposes)

If so, the best thing to do is create a new client library called (for example) my.jquery which contains jQuery 1.8.1. Set cq.jquery as a dependency of my.jquery. Then when you include my.jquery, you will know that cq.jquery is always included first.

Regards,
Justin

6. Re: JQuery - Multiple versions
Tomalec W, Oct 23, 2012 7:05 AM (in response to justin_at_adobe)

Thanks Justin,

That was my proposition for an upgrade of '/etc/foundation/jquery.js'. This is exactly what I want to achieve. Sounds quite good, but then I will end up with two jQueries included on publish unnecessarily.

7. Re: JQuery - Multiple versions
joel_triemstra, Oct 23, 2012 10:20 AM (in response to Ryan Lunka)

"Don't try to include JS libraries by simply requesting them from out in the world. Use Client Library folders."

Why is this a better practice than using a version hosted on Google, e.g.?

8. Re: JQuery - Multiple versions
Ryan Lunka, Oct 23, 2012 10:27 AM (in response to joel_triemstra)

If you are JUST pulling in jQuery, then the difference is negligible. However, you are likely going to have a bunch of your own written JS that uses jQuery, and you may be including other plugins.
You would want to create client library dependencies so that they are all pulled in as one file and minified, instead of pulled in as separate requests.

9. Re: JQuery - Multiple versions
joel_triemstra, Oct 23, 2012 10:41 AM (in response to Ryan Lunka)

Gotcha, that makes sense. Is pulling things in as one file a new feature in v5.5? I'm currently using v5.4, and when I do this, for example <cq:includeClientLib, I get two .js files included in my page.

10. Re: JQuery - Multiple versions
Ryan Lunka, Oct 23, 2012 10:52 AM (in response to joel_triemstra) - 1 person found this helpful

No, I believe it was new in CQ5.4. If I remember correctly, clientlib dependencies will pull clientlibs in as separate files. If you want to combine them into one, I would do so in a single app-specific clientlib (generally at etc/designs/<my-app>) that uses "embed" to pull in the relevant libraries. Embed will combine them, where dependencies will not.

To embed, on the config node for your clientlib you use the String[] property "embeds" (it may not have the 's') and list the categories of the clientlibs that you want to embed. Then your cq:includeClientLib script will reference the category of that client library. Someone correct me if that's wrong... I'm going off the top of my head.

11. Re: JQuery - Multiple versions
joel_triemstra, Oct 23, 2012 1:17 PM (in response to Ryan Lunka)

Cool, thanks, I hadn't seen that property before (it's "embed" for the record). It looks like that property isn't totally recursive - e.g., if I want to embed A, which depends on B, which in turn depends on C, I need to explicitly embed A, B, and C, in the correct order. Whereas the "dependencies" property would handle that for me. Does that sound like what you saw also?

12. Re: JQuery - Multiple versions
joel_triemstra, Oct 24, 2012 7:47 AM (in response to joel_triemstra)

Maybe I should fork this off to another thread, but...
another interesting behavior of the embed property I'm seeing is that it appears to not take into account the behavior of the dependencies property, where a library is only included once. In other words, if I have clientlibs A and B, each of which depends on C, which depends on D, I could use the dependencies property and includeClientLib for A and B, and then C and D would each be included once. However, if I use embed, including A and B leads to C and D each being included twice, which is unfortunate. Did you run into that?

13. Re: JQuery - Multiple versions
Ryan Lunka, Oct 24, 2012 7:58 AM (in response to joel_triemstra)

I'm not sure I've run into that specific situation, but I have run into similar weirdness trying to test the clientlib functionality. It's not very well documented (as far as I've seen), which makes it more confusing. The approach I take is to keep it simple, because these complicated cases do cause unexpected things to happen.

Your first scenario, where you have to explicitly embed A, B, and C in order, confuses me. I wouldn't expect that, nor do I understand how you are "specifying" the order of embeds. Really the only place I would try to use embed is embedding your component-specific clientlibs into a single app-specific one. Everywhere below that level, I would use dependencies. The functionality should respect that if components A, B, and C all depend on jQuery as clientlib D, then D is only included once. I could be wrong though; I haven't sat down and tested these complex situations in a while.

I would like to see some detailed documentation about this topic. It frequently comes up, and I've found very little in the way of best practices or even complete documentation about it.

14. Re: JQuery - Multiple versions
Asa B, Oct 26, 2012 8:34 PM (in response to Ryan Lunka)

You might give require.js a shot; it was designed to handle these scenarios. Also, I'm not sure clientlibs lend themselves to CDN hosting.

15.
Re: JQuery - Multiple versions
joel_triemstra, Oct 30, 2012 11:57 AM (in response to Asa B)

Asa B, are you using RequireJS on the server side, or on the client side?

16. Re: JQuery - Multiple versions
Asa B, Oct 30, 2012 12:34 PM (in response to joel_triemstra)

Hi Joel,

Require.js is client side only. However, there is a server-side build optimization tool called r.js which duplicates much of the clientlibs functionality. r.js is based on uglify by default. Require may be used with or without r.js depending on your project's needs. Here's a link to a video that does a good job of making this a bit easier to understand.
Extras/SteeringCommittee/Meeting-20061130
From FedoraProject

(10:00:05 AM) thl has changed the topic to: FESCo meeting -- Meeting rules at [WWW] -- Init process
(10:00:11 AM) thl: FESCo meeting ping -- abadger1999, awjb, bpepple, c4chris, dgilmore, jeremy, jwb, rdieter, spot, scop, thl, tibbs, warren
(10:00:14 AM) thl: Hi everybody; Who's around?
(10:00:15 AM) ***dgilmore is here
(10:00:17 AM) ***bpepple is here.
(10:00:18 AM) ***awjb is here
(10:00:21 AM) ***abadger1999 is here
(10:00:22 AM) tibbs: I'm here.
(10:00:24 AM) ***rdieter is here
(10:00:24 AM) amitdey [i=eey@h614287.serverkompetenz.net] entered the room.
(10:00:28 AM) ***jwb is here
(10:00:32 AM) ***nirik is in the rabble seats.
(10:00:41 AM) jeremy: thl: I'm mostly here
(10:00:53 AM) dgilmore: thl: i beleive spot is on the road today
(10:00:57 AM) thl: k, then let's start
(10:01:07 AM) thl has changed the topic to: FESCo meeting -- EPEL - where to upload stuff (dgilmore, mmcgrath)
(10:01:08 AM) ***mmcgrath pong
(10:01:15 AM) thl: mmcgrath, dgilmore ?
(10:01:20 AM) dgilmore: thl: /pub/epel
(10:01:26 AM) mmcgrath: We've been requesting builds and they've been working.
(10:01:41 AM) mmcgrath: dgilmore: did notting ever get back to you about that?
(10:01:48 AM) ***scop is half here
(10:02:07 AM) dgilmore: mmcgrath: not yet i need to ping him again on getting the sync process setup so packages can hit the master mirror
(10:02:14 AM) thl: dgilmore, did the jeremy, f13 and notting ack /pub/epel ?
(10:02:30 AM) dgilmore: thl: notting did
(10:02:38 AM) mmcgrath: All in all though I we're ready to announce that people can begin requesting branches.
(10:02:55 AM) thl: dgilmore, k
(10:03:05 AM) dgilmore: We are ready for branches and builds
(10:03:06 AM) thl: if anyone dislikes /pub/epel please yell now
(10:03:18 AM) mmcgrath: The plan for now is to branch from FC-3 unless it doesn't exist, in which case branch from devel.
(10:03:26 AM) warren: back
(10:03:32 AM) thl: mmcgrath, sounds like a good idea
(10:03:43 AM) warren: +1 /pub/epel
(10:04:00 AM) thl: dgilmore, mmcgrath, shall the FESCo members the beta testers? for one week before we annouce the stuff to a wider audeience?
(10:04:10 AM) thl: audience?
(10:04:14 AM) dgilmore: thl: :D im ok with that
(10:04:18 AM) thl: s/one week/some days/
(10:04:20 AM) mmcgrath: yeah, thats probably a wise idea.
(10:04:21 AM) ***c4chris is here now...
(10:04:32 AM) mmcgrath: It'll be easier to have the FESCo people yell at us then the community at large :D
(10:04:39 AM) warren: thl, Yeah, I need stuff from EPEL personally.
(10:04:44 AM) dgilmore: yeah
(10:04:53 AM) dgilmore: warren: its being built right now
(10:04:55 AM) thl: okay; one week?
(10:05:00 AM) mmcgrath: Should I move EnterpriseExtras to /wiki/Extras/EPEL and start filling it with content?
(10:05:02 AM) thl: or a shorter timeframe?
(10:05:12 AM) bpepple: a week seems reasonable.
(10:05:12 AM) dgilmore: one week
(10:05:14 AM) ***mmcgrath likes the week intervals
(10:05:22 AM) thl: mmcgrath, well, we should leave the schedule page where it is
(10:05:24 AM) dgilmore: we can say we are good to go next FESCo meeting
(10:05:35 AM) tibbs: Is the procedure for requesting branches the same?
(10:05:45 AM) thl: mmcgrath, and please use wiki/EnterpriseExtras (or something like that)
(10:05:49 AM) mmcgrath: k, I'll just create a new one.
(10:05:53 AM) dgilmore: tibbs: yes but you dont need a bugzilla number
(10:05:58 AM) mmcgrath: do you want it out of Extras namespace or in it?
(10:06:00 AM) thl: as Extras might not exists anymore in the future ,)
(10:06:02 AM) warren: what is the name?
(10:06:05 AM) warren: EPEL-3?
(10:06:06 AM) mmcgrath: ahhh, good point
(10:06:13 AM) dgilmore: EL-4
(10:06:17 AM) dgilmore: EL-5
(10:06:17 AM) warren: ah
(10:06:19 AM) amitdey left the room.
(10:06:20 AM) warren: k
(10:06:41 AM) dgilmore: warren: there is no EL-3
(10:06:44 AM) thl: mmcgrath, dgilmore, will we hae a spepareate owners.list for epel?
(10:06:54 AM) thl: mmcgrath, dgilmore, and where is bugzilla for it?
(10:06:54 AM) mmcgrath: hmmm
(10:06:56 AM) dgilmore: thl: i think we need it
(10:07:03 AM) mmcgrath: yeah, we'll need it.
(10:07:17 AM) thl: dgilmore, mmcgrath can you work this stuff out until next week?
(10:07:19 AM) dgilmore: we will need a bugzilla component for EPEL with the same syncing as extras currently
(10:07:21 AM) ***mmcgrath notes there's bugzilla integration that will need to be done.
(10:07:28 AM) dgilmore: thl: dure
(10:07:32 AM) bakers left the room (quit: "Leaving").
(10:07:38 AM) mmcgrath: with a new owners, warren: who's in charge of that?
(10:07:38 AM) thl: dgilmore, mmcgrath, and please take a close look at the schedule page; there are probably other issues that we need to solve sonn
(10:07:55 AM) dgilmore: thl: yeah there is
(10:08:17 AM) warren: dgilmore, you mean Enterprise Extras with components from Extras owners.list?
(10:08:27 AM) warren: even though Extras owners.list will have significantly more stuff than EPEL?
(10:08:28 AM) mmcgrath: we can discuss some of the technical details at the fedora-admin meeting today
(10:08:31 AM) warren: k
(10:08:50 AM) dgilmore: warren: Enterprise extras with components from owners.el.list
(10:08:57 AM) dgilmore: or something to the effect
(10:09:13 AM) warren: dgilmore, should be possible to do. we'll talk during infrastructure meeting.
(10:09:13 AM) thl: mmcgrath, dgilmore, we also need discuss who get's allowed for EPEL before we annouce it to the real public
(10:09:14 AM) ***nirik sees the owners.list.el there and thinks about a emacs lisp module. ;)
(10:09:22 AM) thl: mmcgrath, dgilmore, can you prepare that soon?
(10:09:31 AM) mmcgrath: thl: yeah, we'll have something soon.
(10:09:36 AM) dgilmore: warren: we should start a new owners.list it will allow different owners straight away (10:09:37 AM) warren: what happened to that centos extras guy? (10:09:43 AM) thl: mmcgrath, dgilmore thx (10:09:46 AM) ***cweyl wakes his rabble self (10:09:50 AM) warren: dgilmore, agreed (10:10:02 AM) rdieter: warren: z00dax? He's waiting on us to actually *do* something. (: (10:10:08 AM) dgilmore: warren: hes still kinda onboard (10:10:16 AM) BobJensen-Away is now known as BobJensen (10:10:26 AM) dgilmore: rdieter: correct (10:10:47 AM) thl: k, anything else regarding epel? (10:11:08 AM) dgilmore: thl: just request branches and test test test (10:11:30 AM) thl: dgilmore, send a reminder to the fesco list if we don#t test enough ;) (10:11:34 AM) thl has changed the topic to: FESCo meeting -- Opening Core - (warren, jeremy, rdieter) (10:11:40 AM) thl: warren, jeremy, rdieter, any news? (10:12:09 AM) warren: jeremy and jkeating are in private meetings explaining all this with management. I am not privy to the details just yet, but I hear it is going surprisingly good. (10:12:33 AM) f13: meetings to continue Friday (10:12:34 AM) thl: jeremy, f13, thx for your work; I don#t want to do your job ;) (10:12:34 AM) jeremy: thl: nothing yet... soon. hopefully at least some news tomorrow (10:12:53 AM) warren: We should just proceed figuring out what we want for our part. (10:13:03 AM) jwb: all of it (10:13:04 AM) thl: okay, what about hte "future for FESCo" stuff that was discussed on fab? (10:13:05 AM) jwb: we want it all (10:13:13 AM) thl: do we want to discuss this further here? (10:13:20 AM) thl: or wait for a signal from the Board? (10:13:31 AM) jeremy: thl: it's on at least my list of things to discuss in the board meeting tomorrow (10:13:51 AM) thl: jeremy, k, thx (10:14:18 AM) thl: did you like the stuff besides the part "50% ratio for community"? (10:14:22 AM) abadger1999: thl: I liked the direction it was going. 
(10:14:27 AM) thl: or did I forget anything important?
(10:15:06 AM) ***thl takes that as no
(10:15:25 AM) warren: I somehow feel that the discussed was a little overdesigned, but no strong feelings.
(10:16:11 AM) thl: just for the reference a quick question here: please say with "-1" "0" and "-1" if you like the "50% ratio for community" stuff
(10:16:16 AM) thl: just out of interest
(10:16:23 AM) bpepple: -1
(10:16:25 AM) ***jeremy abstains
(10:16:28 AM) jwb: 0
(10:16:33 AM) abadger1999: thl: -1
(10:16:34 AM) thl: + 0,5
(10:16:38 AM) c4chris: 0
(10:16:42 AM) tibbs: 0
(10:16:53 AM) warren: -1
(10:17:02 AM) rdieter: -1
(10:17:11 AM) thl: okay, thx :)
(10:17:23 AM) warren: thl, the approach of asking each specific point like this is probably good though.
(10:17:36 AM) tibbs: I have to agree.
(10:17:58 AM) thl: well, do we want to go thourgh the whole proposal now?
(10:18:18 AM) thl: that would take quite some time...
(10:18:23 AM) tibbs: I need to have it in front of me.
(10:18:32 AM) thl: I don#t have it in front of me either
(10:18:34 AM) bpepple: tibbs: agreed.
(10:18:52 AM) thl: if we want to do that let's do it next week
(10:18:53 AM) warren: thl, let's focus on the proposal next week?
(10:19:06 AM) c4chris: warren, yes
(10:19:08 AM) bpepple: warren: +1
(10:19:11 AM) thl: anything else regarding opening core?
(10:19:21 AM) warren: thl, set a time limit that everyone knows, so everyone is familiar and knows THAT is when they must like or dislike parts.
(10:19:39 AM) thl: well, let's talk a bit about hte size now maybe
(10:19:51 AM) thl: two weeks ago there was the idea to make FESCO bigger
(10:20:00 AM) thl: now the plan seems to be to make it smaller
(10:20:07 AM) warren: eh?
(10:20:24 AM) thl: s/plan seems/the idea/
(10:20:27 AM) warren: I understand why some people want that, but I think it is a bad idea. =)
(10:20:38 AM) c4chris: warren, what is bad ?
(10:20:45 AM) tibbs: I think the size is pretty good as it is.
(10:20:46 AM) warren: c4chris, further shrinking FESCo.
(10:20:58 AM) c4chris: k
(10:21:01 AM) warren: I would prefer the current size or slightly bigger.
(10:21:22 AM) c4chris: I wouldn't like it smaller
(10:21:31 AM) abadger1999: I think I agree with sopwith that "less is more" on a logical level. But emotionally I have misgivings about getting smaller.
(10:21:31 AM) rdieter: it was argued on the list that reducing FESCo size could improve productivity, provided FESCo delegated things more. (:
(10:22:02 AM) thl: well, can everyone just through in his prefered number please here?
(10:22:03 AM) bpepple: abadger1999: I agree.
(10:22:03 AM) abadger1999: We need to delegate more as it is.
(10:22:12 AM) thl: My vote: 11 +/-2
(10:22:19 AM) jwb: abadger1999, you might be faced with a smaller FESCo sooner than you think
(10:22:21 AM) warren: abadger1999, less is more, when you can count on everyone to always be there. But for a volunteer org, I would prefer to have qualified individuals in FESCo and whoever is available at the time to push forward decisions.
(10:22:49 AM) bpepple: thl: Keeping current size.
(10:23:13 AM) warren: Size of FESCo could be its own thread.
(10:23:19 AM) rdieter: what *is* FESCo's current size (dunno off the top of my head)?
(10:23:26 AM) c4chris: thl, 13 is fine with me
(10:23:28 AM) thl: rdieter, 13
(10:23:30 AM) bpepple: rdieter: 13.
(10:23:44 AM) warren:
(10:23:46 AM) rdieter: lucky 13, sounds good to me. (:
(10:24:11 AM) warren: even with 13, we struggle to have enough people respond and make decisions happen at any given time.
(10:24:26 AM) thl: and do we want to continue as we areuntil F7 or a new election quite soon?
(10:24:42 AM) abadger1999: warren: "Silence is consent"?
(10:24:43 AM) jwb: i think a new election soon would be good
(10:24:49 AM) c4chris: wait before the future is a bit clearer
(10:24:52 AM) thl: warren, then we need to get the community more involved instead of makeing FESCo bigger
(10:24:53 AM) bpepple: thl: I'd say wait until FC7.
(10:24:56 AM) warren: thl, figure out how the new governance with open core will work first.
(10:25:29 AM) rdieter: warren: +1
(10:25:32 AM) jeremy: I definitely think we need to see what the future holds... at that point, election/moving things around starts to be more interesting
(10:25:44 AM) thl: okay, then let's move on now
(10:25:53 AM) warren: we don't need to decide size of FESCo until that point
(10:26:00 AM) warren: move on
(10:26:11 AM) thl has changed the topic to: FESCo meeting -- MISC -- broken deps
(10:26:16 AM) thl: ther are quite some broken deps
(10:26:41 AM) tibbs: Some of it is due to rawhide churn, which is OK as long as the bustedness doesn't persist.
(10:26:44 AM) mether left the room (quit: Remote closed the connection).
(10:26:47 AM) thl: I was told even FESCo members own packages where the deps are broken for quite some time
(10:26:52 AM) bpepple: tibbs: agreed.
(10:27:02 AM) warren: thl, rawhide only?
(10:27:07 AM) thl: well, yes, the last two reports where quire big
(10:27:13 AM) thl: warren, not only iirc
(10:27:25 AM) thl: does nobody read those reports?
(10:27:34 AM) dgilmore: the last one was a rawhide curn. i need to bump and rebuild snort which is about 10 of them
(10:27:54 AM) warren: big reports send to a list are less effective than individualized reports sent to each contributor for their specific problems.
(10:27:59 AM) tibbs: I read all of them, but I'm not sure I should try to fix them.
(10:28:05 AM) rdieter: there seem to be some parted brokenness that I noticed.
(10:28:07 AM) nirik: last report:
(10:28:09 AM) c4chris: dgilmore, curn ?
(10:28:13 AM) thl: warren, the maintainers get them, too
(10:28:19 AM) jwb: c4chris, churn
(10:28:23 AM) dgilmore: c4chris: churn
(10:28:32 AM) Rathann: if I may: I prefer to read those reports on the list than have them sent directly to my inbox
(10:28:33 AM) c4chris: oh, sorry...
(10:28:36 AM) warren: thl, ah, didn't realize because I didn't receive any.
(10:28:42 AM) Rathann: warren: ^
(10:28:43 AM) tibbs: I guess I could fix syck-php again; I did it last time.
(10:28:59 AM) warren: Rathann, in the future we may be able to make that configurable
(10:29:04 AM) Rathann: cool
(10:29:16 AM) warren: Rathann, but in the majority case, individual reports when action is needed is more effective.
(10:29:18 AM) nirik: look at how many are > 30 days tho... thats what the complaint was...
(10:29:22 AM) dgilmore: what would be nice is when someone down in the low level end does a bump like that they give a heads up email
(10:29:29 AM) dgilmore: that way alot of noise can be avoided
(10:29:34 AM) bpepple: thl: maybe if a package remains broken for something like 7 days, FESCo needs to step.
(10:29:38 AM) tibbs: Do we want to cover individual packages now?
(10:30:02 AM) thl: bpepple, that might soon end in a lot of work...
(10:30:03 AM) Rathann: warren: majority? how many people did you ask to be able to say that?
(10:30:04 AM) jeremy: dgilmore: heads up are being sent for things that I've seen; but it's still going to lead to at least a day of broken things in extras as long as extras has to wait for the rawhide sync
(10:30:06 AM) dgilmore: plague can only be fixed by legacy and its low priority
(10:30:25 AM) rdieter: gift - 0.11.8.1-6.fc7.i386 (32 days), NOTMYBUG:
(10:30:32 AM) bpepple: thl: True, but some maintainers don't seem willing to ask for help based on the report.
(10:30:33 AM) jwb: dgilmore, why can't it revert to an older version?
(10:30:52 AM) warren: Rathann, it is unrealistic to expect all participants of the project to read daily reports on a list, when 99% of the time it does not concern them.
(10:30:53 AM) thl: bpepple, I think we need a QA Sig and/or a release manager that should take care of it
(10:31:03 AM) bpepple: thl: That sounds fine.
(10:31:15 AM) warren: Rathann, it is more effective to notify the individual in the rare case where their attention is needed, rather than to expect EVERYONE to watch constantly.
(10:31:20 AM) thl: but it seems nobody want to do the work :-/
(10:31:25 AM) nirik: 14 packages broken more than 7 days.
(10:31:26 AM) thl: maybe we should ask on the list
(10:31:30 AM) dgilmore: jeremy: i never got one for libpcap update
(10:31:40 AM) Rathann: warren: is it? I wonder... I thought maintainers had some mandatory subscriptions
(10:31:45 AM) dgilmore: jwb: it would require epoch so i guess it could be done
(10:31:56 AM) jeremy: dgilmore: I know I saw mail about pcap somewhere....
(10:32:09 AM) dgilmore: jeremy: maybe it went to core only people Rathann Rathann|work
(10:32:22 AM) daniel_hozac: dgilmore: fedora-maintainers
(10:32:23 AM) Rathann: warren: well I don't really care that much as long as it doesn't clutter my inbox
(10:32:25 AM) warren: Rathann, in an ideal world yes, but we cannot realistically demand such things from volunteers.
(10:32:26 AM) nirik: pcap update mail heads up was on maintainers...
(10:32:31 AM) jeremy: dgilmore: I don't own anything that links to it... so it was definitely on a list
(10:32:36 AM) thl: is anybody willing to put this topic on his plate and work out a solution?
(10:32:41 AM) warren: Rathann, I'm talking about only RARE notifications when something is wrong and you are responsible for fixing it.
(10:32:47 AM) dgilmore: ok i missed it
(10:32:52 AM) warren: Rathann, not daily reports
(10:32:58 AM) c4chris: thl, if we decide QA members can step in and fix old broken deps, we migth find som epeople to do the work...
(10:32:59 AM) Rathann: ok then
(10:32:59 AM) abadger1999: Rathann: Mandatory subscription but getting people to read the report when it almost never applies to them is the hard part.
(10:33:01 AM) warren: Rathann, unless you haven't fixed it for days in a row, then you'll get daily reports.
(10:33:09 AM) tibbs: Well, there's one maintainer who has four packages broken for 30+ days.
(10:33:24 AM) jima: istr seeing a pcap warning, it just didn't occur to me that some of my newer inherited packages used pcap ;)
(10:33:33 AM) thl: c4chris, everybody can do that already ;? allows it
(10:33:52 AM) ***rdieter pulls out his cluestick. Who needs to be whacked? (:
(10:33:56 AM) c4chris: thl, oh ok. I didn't remember that part
(10:34:04 AM) thl: c4chris, np :)
(10:34:13 AM) c4chris: k
(10:34:26 AM) tibbs: I think the point is that if you can't get to your packages you need to think strongly about orphaning them.
(10:35:03 AM) thl: tibbs, I agree, but we need somebody that reminds people about it ;)
(10:35:12 AM) bpepple: thl: Agreed.
(10:35:13 AM) warren: tibbs, it might just be a problem of notification
(10:35:31 AM) thl: shall we set up a official "release manager"?
(10:35:37 AM) warren: thl, there is no reason why private individual notification can't happen in an automated fashion.
(10:35:54 AM) _wart_ [n=wart@DHCP-126-227.caltech.edu] entered the room.
(10:35:55 AM) nirik: warren: it already does... doesnt it? I get them
(10:36:04 AM) thl: warren, ? contributors get individual notifications
(10:36:08 AM) tibbs: Yes, I got one the other day because of the libpcap churn.
(10:36:20 AM) tibbs: I'm sure ixs knows he has many packages that need work or rebuilding.
(10:36:25 AM) nirik: ie, p0f broke due to libpcap... I got a email about the broken dep. I updated it.
(10:36:31 AM) warren: thl, kind of like security, release manager can be a tedious and thankless task, less fun when your job is to just poke people. It might only work with someone accountable to the role.
(10:36:32 AM) thl: we probably should limit the reports to the list to the "long time not fixed" sutt
(10:36:35 AM) thl: stuff
(10:36:38 AM) tibbs: I'm just not sure why he doesn't orphan them or ask for help.
(10:36:49 AM) tibbs: thl: I'd go for that.
(10:36:50 AM) warren: thl, err... yeah, I'm a moron. =)
(10:36:52 AM) thl: warren, sure; but we can at least try
(10:36:59 AM) Belegdol [n=jsikorsk@212.191.172.124] entered the room.
(10:37:00 AM) nirik: how about removing broken > 7 days packages? forced orphan
(10:37:05 AM) nirik: (for on devel)
(10:37:09 AM) warren: for devel, fine
(10:37:10 AM) nirik: non devel. ;)
(10:37:18 AM) tibbs: non-devel is important.
(10:37:26 AM) nirik: agreed.
(10:37:27 AM) warren: for non-devel, X days, WARN, Y days, REMOVE
(10:37:32 AM) thl: how about vacations?
(10:37:34 AM) tibbs: I mean, busted dependencies can break installations.
(10:37:39 AM) thl: how checks the cacation page in the wiki?
(10:37:49 AM) thl: who does the orhan process?
(10:38:05 AM) warren: co-maintainership and granting permissions (even just verbally) other contributors should solve that.
(10:38:19 AM) nirik: isn
(10:38:30 AM) warren: Often contributors are just doing the Right Thing when something is obviously broken.
(10:38:34 AM) nirik: 't it better to remove the broken package and they can fix/push a new one when they get back?
(10:38:44 AM) tibbs: I think this is where trusted members of the community just need to step in.
(10:38:47 AM) warren: Ownership should not be such a strict concept.
(10:38:54 AM) thl: I think we really need to bring this discussion to f-e-l
(10:38:59 AM) thl: any volunteers?
(10:38:59 AM) bpepple: thl: +1
(10:39:00 AM) tibbs: But the question is whether this hides maintainers who have gone away.
(10:39:08 AM) warren: How do you feel with the general idea of...
(10:39:09 AM) c4chris: If there are dependencies, I'd prefer a rebuild
(10:39:17 AM) warren: broken depenency, X days, WARN, Y days, REMOVE
(10:39:35 AM) thl: warren, I tend to agree, but it's not that easy
(10:39:42 AM) thl: we are all on vacation now and them
(10:39:56 AM) warren: thus co-maintainership, grants of permission, etc.
(10:40:15 AM) thl: well, does the co-maintainership stuff from owners.list work these days?
(10:40:18 AM) tibbs: Especially in released distros, these things need to get fixed. Even seven days is too long.
(10:40:22 AM) thl: or is it still broken?
(10:40:31 AM) thl: and does the script send mails to the co-maintainers, too?
(10:40:37 AM) warren: Bob asks me, "Hey Warren, your foo package is broken." I say, "Hmm. I'm busy now, do you know how to fix it?" Bob says, "Sure." Warren says, "Go ahead."
(10:40:38 AM) thl: tibbs, +1
(10:40:42 AM) bpepple: tibbs: agreed.
(10:40:43 AM) tibbs: It works as long as you understand it doesn't do anything.
(10:40:48 AM) xris [n=xris@dsl081-161-160.sea1.dsl.speakeasy.net] entered the room.
(10:41:04 AM) ***BobJensen never said any such thing
(10:41:05 AM) warren: thl, jeremy was interested in fixing the initialcc thing, but got stuck. I need to follow up...
(10:41:08 AM) ***warren sends mail about that...
(10:41:25 AM) thl: I think we should fix the initialcc thing
(10:41:33 AM) thl: and then encourage com-maintainership more
(10:41:41 AM) c4chris: thl, yes
(10:41:42 AM) thl: s/com/co/
(10:41:46 AM) jeremy: thl: the first thing is to move the script. then it should be pretty fixable :)
(10:42:12 AM) thl: okay, we discussed a lot of things now
(10:42:12 AM) warren: jeremy, the script is in cvs
(10:42:20 AM) thl: someone really needs to sum it up
(10:42:27 AM) thl: and post it for discussion on f-e-l
(10:43:03 AM) ***gregdek wonders who the secretary is. :)
(10:43:10 AM) c4chris: I can do that
(10:43:17 AM) tibbs: We don't need a secretary; we have IRC logs.
(10:43:24 AM) thl: c4chris, that would be great; thx
(10:43:37 AM) thl: c4chris, I'll create a sperate page on the schedule for it
(10:43:44 AM) c4chris: thl, k
(10:43:54 AM) thl: c4chris, could you please add the most important stuff there ? tia!
(10:43:59 AM) thl: k, so let's move on
(10:44:00 AM) nirik: I think we should push to fix the existing <=fc6 ones soon...
(10:44:06 AM) tibbs: A lot of this discussion goes for the EVR problems as well.
(10:44:29 AM) thl: tibbs, yes
(10:44:32 AM) c4chris: thl, will do
(10:44:43 AM) thl: nirik, are you intersted to just fix it in cvs?
(10:45:08 AM) thl: nirik, but warning, people might yell...
(10:45:15 AM) nirik: well, someone should if maintainers aren't...
(10:45:21 AM) thl: nirik, exactly
(10:45:49 AM) thl: I always wanted to do it myself, but did not find the time for it
(10:45:56 AM) nirik: I guess I can look at what needs to be done...
(10:46:19 AM) thl: nirik, many thx; fixing the most important stuff would be a great start
(10:46:22 AM) ***c4chris pases an asbestos +2 suit to nirik
(10:46:28 AM) thl: so, let's move on now
(10:46:30 AM) ***nirik is pretty flame resistant.
(10:46:49 AM) thl has changed the topic to: FESCO meeting -- report from packaging committee
(10:47:17 AM) thl: well, there was quite a bit of discussion on the private FESCo list in the past two hours
(10:47:26 AM) thl: we should do that in the public in the future
(10:47:36 AM) tibbs: Yes, sorry for not having that sent earlier and to the proper place.
(10:47:50 AM) tibbs: I'm just going to take responsibility for doing it in the future.
(10:47:58 AM) rdieter: is fedora-extras list more appropriate then?
(10:48:12 AM) tibbs: thl mentioned fedora-maintainers
(10:48:17 AM) thl: sorry, telephone call here
(10:48:23 AM) jwb: fedora-maintainers
(10:48:34 AM) ***thl only partly here for a moment
(10:48:50 AM) tibbs: I still don't think I'll attempt to summarize the discussion, though.
(10:49:07 AM) ***rdieter nods, fedora-maintainers makes a little more sense.
(10:49:24 AM) rdieter: tibbs: I say summarize anyway, screw anyone who whines.
(10:49:36 AM) ***nirik wonders what the general topic was at least...
(10:49:51 AM) bpepple: nirik: group tag & comps file.
(10:50:05 AM) nirik: ah, that can of worms. ;)
(10:50:17 AM) c4chris: nirik, precisely
(10:50:21 AM) tibbs: I am avoiding attempting to restate the opinions of a certain person, since that would surely result in my demotivation through massive flaming.
(10:51:05 AM) rdieter: tibbs: want to borrow c4chirs' asbestos +2 suit? (:
(10:51:08 AM) ***thl still busy on the telephone, sorry
(10:51:18 AM) jwb: tibbs, who?
(10:51:19 AM) thl: jwb ?
(10:51:23 AM) c4chris: tibbs, I'm pretty sure I can guess...
(10:51:34 AM) thl: can you take the meeting over for a moment please?
(10:51:38 AM) jwb: thl, sure
(10:51:40 AM) tibbs: In the end it's not important.
(10:51:52 AM) nirik: in a perfect dream world, I would love to see a web interface where there is a page for each package, and maintainer could add tags/comments, users could add comments, people could rate the package, and an rss feed could be used to show updates to the package.
(10:52:24 AM) nirik: and a search interface could find packages that match tags or descriptions or commets.
(10:52:36 AM) jwb: ok, so the proposal is to make the Group tag optional
(10:52:39 AM) c4chris: nirik, feel like working on the package database ? ;-)
(10:52:53 AM) dgilmore: jwb: i nack it
(10:53:01 AM) jwb: this apparently breaks smart and apt
(10:53:08 AM) tibbs: It doesn't break them.
(10:53:10 AM) nirik: c4chris: no time I fear... ;)
(10:53:16 AM) rdieter: jwb: not necessarily, but does make them less useful.
(10:53:17 AM) jeremy: jwb: break is an awfully strong word to use there
(10:53:32 AM) jwb: ok, sorry
(10:53:34 AM) tibbs: I can put Group: uncategorized on all of my packages and get the same result.
(10:53:36 AM) jwb: just trying to summarize
(10:54:14 AM) tibbs: I think the bottom line is that this should have gone out for public discussion before the packaging committee voted on it.
(10:54:28 AM) jwb: tibbs, perhaps. but what is done is done
(10:54:30 AM) |DrJef| [n=jefrey@fedora/Jef] entered the room.
(10:54:30 AM) rdieter: to me it's simple: I fail to see what problem is being solved here (and only see new ones being cause by the solution).
(10:54:42 AM) jwb: rdieter, i agree
(10:54:46 AM) bpepple: rdieter: +1
(10:54:55 AM) rdieter: tibbs: +1 too, it could have been handled better.
(10:54:56 AM) tibbs: So we have this meaningless tag that you have to fill in with something.
(10:55:15 AM) tibbs: I would like to see Comps settle down first.
(10:55:22 AM) dgilmore: tibbs: it need not be meaningless
(10:55:29 AM) tibbs: But it is today.
(10:55:37 AM) tibbs: Anyeay, please let me finish.
(10:55:38 AM) jwb: it is today for some packages
(10:55:38 AM) abadger1999: jwb, rdieter: Did my list of reasons make no sense?
(10:55:44 AM) tibbs: I would like to see comps settle down first.
(10:55:45 AM) jwb: abadger1999, somewhat
(10:55:52 AM) bpepple: tibbs: Agreed. I think once the fate of comps is decided might be a better time to look at the group tag.
(10:55:57 AM) tibbs: And then figure out how to somehow encode Comps in groups.
(10:56:04 AM) c4chris: Yes, let's settle down comps first
(10:56:18 AM) c4chris: the killing Group will be a no brainer
(10:56:20 AM) tibbs: Either pick the primary comps location or somehow encode all of the comps locations into groups.
(10:56:24 AM) c4chris: s/the/then/
(10:56:26 AM) jwb: tibbs, i like that
(10:56:27 AM) abadger1999: tibbs: +1
(10:56:54 AM) tibbs: But it's still maintaining the same information in two places.
(10:57:03 AM) jwb: for now
(10:57:10 AM) tibbs: Perhaps once our packaging database is advanced enough, some of this stuff can be dealt with more cleanly.
(10:57:17 AM) rdieter: yeah, the sky is falling.
end of the world. I think we have bigger fish to fry.
(10:57:23 AM) tibbs: I.e. filling the spec from the database, or filling the database from the spec.
(10:57:28 AM) jwb: ok, so how is for letting the comps stuff settle down before ack/nacking this?
(10:57:37 AM) jwb: s/how/who
(10:57:39 AM) tibbs: rdieter: I agree that there are more important things to spend limited committee time on.
(10:57:45 AM) bpepple: jwb +1
(10:57:49 AM) jwb: +2
(10:57:51 AM) tibbs: jwb: +1.
(10:57:52 AM) c4chris: jwb, +1
(10:57:54 AM) jwb: gah, +1
(10:57:57 AM) abadger1999: jwb: +1
(10:58:11 AM) rdieter: +1
(10:58:21 AM) tibbs: I think part of the problem is that the packaging committee got tied up in the discussion and failed to ask the important question:
(10:58:25 AM) tibbs: why are we voting on this now?
(10:58:28 AM) jwb: jeremy, ?
(10:58:39 AM) jwb: warren, ?
(10:59:10 AM) abadger1999: tibbs: I felt like we voted on it because it was the beginning of the FC7 cycle nad so it was appropriate to modify rpm now.
(10:59:33 AM) jeremy: I'm fine with waiting
(10:59:42 AM) c4chris: abadger1999, makes sense
(10:59:48 AM) ***thl is back, sorry again
(10:59:49 AM) abadger1999: tibbs: I just didn't know that the opposition to shifting the relevant information to comps.xml existed.
(10:59:51 AM) jeremy: I also think it's okay if rpm is modified to not choke if group _isn't_ present
(11:00:05 AM) tibbs: jeremy: +1.
(11:00:11 AM) c4chris: jeremy, +1
(11:00:21 AM) abadger1999: jeremy: +1
(11:00:32 AM) jwb: i have no problems with making the technical change
(11:00:48 AM) tibbs: I thought of the vote as "let core rpm change to allow it", but it grew beyond that.
(11:00:49 AM) jwb: it's the policy change i would like to wait on
(11:01:04 AM) tibbs: And no effort was made to distinquish those things.
(11:01:12 AM) rdieter: jwb: +1
(11:01:18 AM) thl: jwb, +1
(11:01:28 AM) c4chris: jwb, +1
(11:01:43 AM) tibbs: I can buy into that. +1.
(11:01:44 AM) thl: and a discussion on the list should IMHO still be done, too, even if the PC decided on it
(11:01:47 AM) abadger1999: jwb: +1
(11:02:11 AM) jwb: thl, yes
(11:02:16 AM) thl: s/on it/& already/
(11:02:26 AM) thl: jwb, shall I take over again?
(11:02:31 AM) jwb: sure
(11:02:49 AM) thl: so the consensus afaics is: we ask the PC to defer the issue for now
(11:03:05 AM) thl: but the wrok on technical things to get rid of Groups continues?
(11:03:13 AM) bpepple: thl: +1
(11:03:13 AM) thl: that correct?
(11:03:16 AM) jwb: +1
(11:03:21 AM) c4chris: thl, +1
(11:03:27 AM) thl: k, hen let's move on
(11:03:30 AM) thl: it's getting late
(11:03:43 AM) thl has changed the topic to: FESCo meeting -- Sponsorship nominations
(11:03:47 AM) thl: any new nominations?
(11:03:54 AM) ***bpepple doesn't have any.
(11:04:03 AM) c4chris: nope
(11:04:10 AM) mether [n=ask@fedora/mether] entered the room.
(11:04:20 AM) thl has changed the topic to: FESCo meeting -- Maintainer Responsibility Policy
(11:04:35 AM) thl: bpepple sent e-mail to f-e-l about EOL for FC3 & FC4. Not many replied, but MSchwendt did ask the following, which should be decided upon: "EOL as in stop-ship? As in close the build servers for FC-3 and FC-4 and make the push script disable FC-3 and FC-4, too?"
(11:04:47 AM) thl: close build servers?
(11:04:56 AM) jwb: i believe EOL should mean EOL
(11:04:58 AM) thl: + 0.75
(11:05:18 AM) tibbs: That's a tough one.
(11:05:18 AM) thl: or are there any good reasons to leave FE3 and FE4 open?
(11:05:21 AM) bpepple: I'm fine with closing the build servers.
(11:05:41 AM) tibbs: One one hand, the security folks (which is mostly scop at the moment) would love to see them go.
(11:05:53 AM) tibbs: On the other hand, epel builds from FC3.
(11:06:07 AM) dgilmore: thl: if legacys closes FC-3 and 4 then we should close FE3 and 4
(11:06:09 AM) dgilmore: not before
(11:06:14 AM) thl: tibbs, it branches from FE3 and that should be no problem
(11:06:15 AM) abadger1999: warren had a good point last week about not slamming the door on a new community legacy.
(11:06:17 AM) kushal left the room (quit: Read error: 110 (Connection timed out)).
(11:06:22 AM) abadger1999: How does that fit in?
(11:06:31 AM) jwb: abadger1999, it doesn't
(11:06:39 AM) jwb: abadger1999, we can't wait around for one to step up
(11:06:46 AM) rdieter: I think we wait until legacy announces something first.
(11:06:49 AM) thl: call out EOL, but leave the builders open until they break ?
(11:06:52 AM) kushal [n=kd@125.22.34.30] entered the room.
(11:07:31 AM) thl: rdieter, are there any plans from legacy to annouce something about it in the near future?
(11:07:47 AM) tibbs: I recall that they're still discussing it.
(11:07:52 AM) rdieter: thl: there were grumblings regarding that on the legacy-list today.
(11:08:02 AM) tibbs: But it's essentially already done.
(11:08:17 AM) rdieter: thl: seems the consensus was when, not if, an announcement will be made.
(11:08:19 AM) thl: wwthen let's get back to it next week
(11:08:23 AM) thl: it#s quite late already
(11:08:36 AM) thl: anything else?
(11:08:50 AM) thl has changed the topic to: FESCo meeting -- free discussion around extras
(11:08:58 AM) rdieter: (anything else we can put off until next week?) (:
(11:09:09 AM) ***c4chris has nothing
(11:09:24 AM) thl: ther are some other things on the schedule, but there is no need to discuss them now afaics
(11:09:29 AM) jwb: at some point in the future we need to get a show of hands on who plans on sticking around until F7 is out
(11:09:36 AM) jwb: in FESCo i mean
(11:10:02 AM) bpepple: jwb: you mean for the next election?
(11:10:25 AM) jwb: bpepple, until the next election, yes.
i mean we need to know if anyone is going to step down
(11:10:43 AM) bpepple: jwb: ah.
(11:10:53 AM) tibbs: I'll be here as long as the community wants me to be here.
(11:11:06 AM) jwb: there have been rumblings from various people. perhaps we could ask on the list
(11:11:08 AM) thl: bahh, telephone again
(11:11:09 AM) ***c4chris plans to stay
(11:11:19 AM) thl: jwb, could you please end the meeting
(11:11:25 AM) ***bpepple plans to stay ass well.
(11:11:33 AM) bpepple: s/ass/as/
(11:11:35 AM) jwb: thl, yes
(11:11:39 AM) abadger1999: jwb: That sounds like a good idea.
(11:11:50 AM) jwb: ok, i'll start the thread on the list
(11:12:44 AM) thl: sorry, it was the telephone once again
(11:12:49 AM) jwb: is there anything else?
(11:12:58 AM) thl: I don't think so
(11:13:02 AM) thl: let's close for today
(11:13:07 AM) jwb: ok sounds good to me
(11:13:11 AM) ***thl will end the meeting in 30
(11:13:12 AM) bpepple: thl: +1
(11:13:26 AM) ***thl will end the meeting in 15
(11:13:37 AM) thl: -- MARK -- Meeting End
http://fedoraproject.org/wiki/Extras/SteeringCommittee/Meeting-20061130
Class System relative to Non-Component Creation

Is the new class system only for UI-type Components, or should it be used any time the "new" keyword would be used? For example, when creating a new Model, Store, etc., would it be advisable to use "Ext.define" instead of instantiation with the "new" keyword? It appears that the examples for the new Data package still use the "new" keyword in creating the respective classes, and that defining the namespace with Ext.ns would therefore still be required. Any thoughts on best practices?

Thanks for the reply. I don't know why I was thinking instantiation when I should have been thinking definition, as it appears Ext.define is being preferred over Ext.extend.
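The distinction the thread lands on can be illustrated with a short Ext JS 4 sketch. This is a hedged example, not code from the thread: the class name MyApp.model.User and its fields are made up, while Ext.define, Ext.create, and Ext.data.Model are the standard Ext JS 4 APIs. Ext.define handles definition (and creates the namespace for you, replacing an explicit Ext.ns call plus Ext.extend), while instantiation still uses Ext.create or the new keyword:

```javascript
// Definition: registers the class and auto-creates the MyApp.model namespace,
// so no separate Ext.ns('MyApp.model') call is needed.
Ext.define('MyApp.model.User', {
    extend: 'Ext.data.Model',
    fields: ['id', 'name']
});

// Instantiation: either form works; Ext.create resolves the class by name
// and can trigger dynamic loading if the class isn't defined yet.
var user = Ext.create('MyApp.model.User', { name: 'Ada' });
var same = new MyApp.model.User({ name: 'Ada' });
```

So Ext.define replaces Ext.extend for defining classes of any kind, not just Components, but it does not replace new/Ext.create for creating instances.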
http://www.sencha.com/forum/showthread.php?124542-Class-System-relative-to-Non-Component-Creation
The CLR has two different techniques for implementing interfaces. These two techniques are exposed with distinct syntax in C#:

    interface I
    {
        void m();
    }

    class C : I
    {
        public virtual void m() {}   // implicit contract matching
    }

    class D : I
    {
        void I.m() {}                // explicit contract matching
    }

At first glance, it may seem like the choice between these two forms is a stylistic one. However, there are actually deep semantic differences between the two forms.

(C# has at least one other place where a choice of semantics is encoded in what seems to be a stylistic choice. A class constructor can be expressed in C# either as a static constructor method, or as assignments in a set of static field declarations. Depending on this stylistic choice, the class will or will not be marked with tdBeforeFieldInit. This mark, shown as beforefieldinit in ILDASM, affects the semantics of when the .cctor method will be executed by the CLR. This also results in performance differences, particularly in situations like NGEN or domain-neutral code.)

In class C, we get a public class method 'm' that does double duty as the implementation of the interface method. This is all pretty vanilla:

    .method public hidebysig newslot virtual instance void m() cil managed
    {
        // Code size 1 (0x1)
        .maxstack 0
        IL_0000: ret
    } // end of method C::m

But in class D, we see something quite different:

    .method private hidebysig newslot virtual final instance void I.m() cil managed
    {
        .override I::m
        // Code size 1 (0x1)
        .maxstack 0
        IL_0000: ret
    } // end of method D::I.m

There are several surprising things about this case:

- The method is introduced (newslot) with the bizarre incantation of virtual, private and final.
- The name of the method isn't even 'm'. It is 'I.m'.
- There is a mysterious 'override' clause associated with the method body.

The method is marked as virtual because the CLR can only implement interface contracts using virtual.
At the language level, C# allows non-virtuals to implement interface contracts. How do they get around the CLR restriction? Well, if the class that introduces the non-virtual is in the same assembly as the class that uses that method to implement the interface contract, C# quietly defines the base class' method as virtual. If the base class that introduced the non-virtual is in a different assembly, then C# generates a virtual thunk in the subtype which delegates to the non-virtual base method.

Getting back to our example, I.m is declared as private because it is not available for calling via the class. It can only be called via the interface. I.m is declared as final because C# really doesn't want to mark the method as virtual. This was forced on them by the architectural decision / implementation restriction that interface contracts can only be implemented by virtual methods.

As for the name, C# could have picked anything that's a legal identifier. This member isn't available for external binding, since it is private to the class and only accessible through the interface. Since the name 'I.m' is insignificant, obviously this isn't what tells the CLR loader to use this method to satisfy the interface contract. In fact, it's that mysterious 'override' clause.

This is what's known as a MethodImpl. It should not be confused with System.Runtime.CompilerServices.MethodImplAttribute, which controls a method's eligibility for inlining, its synchronization behavior and other details. A MethodImpl is a statement in the metadata that matches a method body to a method contract. Here it is used to match the body I.m with the interface contract I::m. Generally, you will see MethodImpls used in this way to match methods to interfaces. But MethodImpls can be used to match any method body to any contract (e.g.
a class virtual slot) provided that:

- The contract is virtual
- The body is virtual
- The body and the MethodImpl are defined on the same class
- The contract is defined either on this class or somewhere up the hierarchy (including implemented interfaces)

Once again, it's open to debate whether MethodImpls require virtual contracts and bodies for sound architectural reasons or for temporary implementation reasons.

The ECMA spec contains the rules for how interface contracts are satisfied by class methods. This explains how the base class' layout can be at least partially re-used, and it explains the precedence of the two techniques we've seen above (class methods match by name and signature vs. MethodImpls which match methods of any name that have the correct signature). It also mentions one other surprising detail of interface layout. In the example below, we would expect Derived and Redundant to have the same layout. Sure, there's a redundant mention of interface I on class Redundant, but that seems irrelevant.

    interface I
    {
        void m();
    }

    class A : I
    {
        public virtual void m() {}
    }

    class Derived : A
    {
        public new virtual void m() {}
    }

    class Redundant : A, I
    {
        public new virtual void m() {}
    }

In fact, it is highly significant. Class A has already satisfied the interface contract for I. Class Derived simply inherits that layout. The new method Derived.m is unrelated to I.m. But in class Redundant, we mention interface I in the implements list. This causes the CLR loader to satisfy the interface contract all over again. In this new layout, Redundant.m can be used to satisfy I.m.

If you're thinking that some of this stuff is pretty subtle, you are right. Normally, developers wouldn't concern themselves with the different ways that the CLR can satisfy interface contracts. Instead, you would happily code to your language rules and you would trust your IL generator to spit out the appropriate metadata.
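The private/final layout described earlier has a consequence that is easy to check at the C# level. This is an illustrative sketch reusing classes C and D from the opening example (the variable names are mine): the explicitly implemented body is reachable only through the interface, while the implicit implementation is callable both ways.

```csharp
C c = new C();
c.m();          // OK: the public class method doubles as the interface implementation
((I)c).m();     // OK: the same body is reached through the interface contract

D d = new D();
// d.m();       // compile error: D exposes no accessible method named 'm'
((I)d).m();     // OK: the private, final 'I.m' body satisfies the I::m contract
```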
In fact, one of the reasons we have all these subtle rules in the CLR is so we can accommodate all the different language rules that we encounter.

I would have liked to use privatescope here to make sure that the name I pick doesn't matter, but unfortunately the spec doesn't allow this. The CLR doesn't enforce this and actually allows privatescope methods to be virtual, but to my surprise I found that a privatescope method can be overridden by name; I wouldn't have expected this.

Oh dear, I'm suspicious that the spec for 'privatescope' is not what we (umm, I) actually implemented. The original feature request was that such members could only be accessed by a Def token, not by a Ref token. This was an attempt to approximate the needs of languages like C and C++, which can have static members scoped to methods, files, and modules, and where all those members might have the same names and even the same types.

However, that original proposal was broken in several ways. First, it's legal (though non-optimal) to use Ref tokens to make intra-Module references. Furthermore, the choice of a token style should not be the trigger of a fundamental accessibility concept. And finally, this would preclude any use of reflection to late-bind to these members. Although I didn't have a scenario in mind, I didn't like the idea that an accessibility style would prevent late-binding. Finally, I must admit, the implementation using tokens was inconvenient. We don't have easy access to the tokens by the time we get to our accessibility checks (in part because they don't exist in the late-bound cases).

Therefore I implemented the rule that any code binding to a 'privatescope' member must be in the same Module as the target. This was a couple of lines of code and is closer to what our language partners really needed. This means that you can indeed override them by name. But that's actually a different problem.
Some languages (like C++) consider overridability and accessibility to be orthogonal concepts. A supertype can declare a private virtual; a subtype can then provide a public override. Other languages (like C# and VB.NET) consider these two concepts to be linked. If you cannot access a member, you cannot override it. Since this is enforced by their compilers, they assume that this is an enforced CLR behavior. In fact, they may even build security on top of this assumption.

Jeroen, it sounds like you are a compiler writer. If you picked up our new release, have a look at corhdr.h. It contains a new CheckAccessOnOverride bit. This gives you the ability to force accessibility and overridability to be linked for the members you attach it to. If you do this, the CLR will start enforcing the rule that members cannot be overridden if they cannot be accessed. It sounds like this is what you were hoping to achieve with 'privatescope'.

I'm writing an open source JVM for .NET, and I ran into this when I was dealing with classes that implement interfaces but don't implement all of the interface's methods. In the JVM this is actually legal and causes an AbstractMethodError to be thrown when the missing method is called. I wanted to emit a privatescope stub for the interface method that throws the exception. I don't really remember (this was a while ago), but I think the reason I ran into the overriding issue was because I wasn't using newslot where I should. I'm aware of strict, but it wasn't really what I needed. Is there any chance the spec will be updated to allow virtual privatescope methods? Thanks!
https://blogs.msdn.microsoft.com/cbrumme/2003/05/03/interface-layout/
Variables are not automatically given an initial value by the system, and start with whatever garbage is left in memory when they are allocated. Pointers have the potential to cause substantial damage when they are used without a valid address. For this reason it's important to initialize them. The standard initialization is to the predefined constant NULL. Dereferencing a NULL pointer will cause an error on almost all systems, making the debugging process somewhat easier. NULL is defined in <cstdlib>. #include <cstdlib> . . . Node* head = NULL; // Initialized pointer to NULL. In theory, NULL can be defined to be non-zero. In practice this doesn't happen on any mainstream system. So much code is written making the assumption that NULL is zero, it is hard to imagine it using anything except 0.
http://www.fredosaurus.com/notes-cpp/pointer-ref/50nullpointer.html
You can configure Shield to use Public Key Infrastructure (PKI) certificates to authenticate users. This requires clients to present X.509 certificates. To use PKI, you configure a PKI realm, enable client authentication on the desired network layers (transport or http), and map the DNs from the user certificates to Shield roles in the role mapping file. You can use a combination of PKI encryption and username and password authentication. For example, you can enable SSL/TLS on the transport layer and define a PKI realm to require transport clients to authenticate with X.509 certificates, while still authenticating HTTP traffic using usernames and passwords. You can also set shield.transport.ssl.client.auth to optional to allow clients without certificates to authenticate with other credentials. You must enable SSL/TLS to use PKI. For more information, see Setting Up SSL/TLS on a Cluster. As with other realms, you configure the options for a pki realm in the shield.authc.realms namespace in elasticsearch.yml. To configure a PKI realm, add a realm configuration of type pki to elasticsearch.yml in the shield.authc.realms namespace. At a minimum, you must set the realm type to pki. If you are configuring multiple realms, you should also explicitly set the order attribute. See PKI Realm Settings for all of the options you can set for a pki realm. For example, the following snippet shows the most basic PKI realm configuration:

shield:
  authc:
    realms:
      pki1:
        type: pki

With this configuration, any certificate trusted by the SSL/TLS layer is accepted for authentication. The username is the common name (CN) extracted from the DN of the certificate. If you want to use something other than the CN of the DN as the username, you can specify a regex to extract the desired username.
For example, the regex in the username_pattern setting of the following configuration extracts the email address from the DN:

shield:
  authc:
    realms:
      pki1:
        type: pki
        username_pattern: "EMAILADDRESS=(.*?)(?:,|$)"

You can also restrict which certificates are trusted by pointing the realm at a dedicated truststore:

shield:
  authc:
    realms:
      pki1:
        type: pki
        truststore:
          path: "/path/to/pki_truststore.jks"
          password: "changeme"

Then restart Elasticsearch. You assign roles for PKI users in the role mapping file stored on each node. You identify a user by the distinguished name in their certificate. For example, a mapping configuration can assign John Doe the user role. For more information, see Mapping Users and Groups to Roles.
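The role-mapping example itself did not survive extraction. Based on the role_mapping.yml format in the Shield documentation, an entry assigning the user role to John Doe would look roughly like this (the DN is illustrative):

```yaml
user:
  - "cn=John Doe,ou=example,o=com"
```

Each top-level key is a Shield role name, and each list item is a certificate DN mapped to that role.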
https://www.elastic.co/guide/en/shield/current/pki-realm.html
With a GLib implementation of the Python asyncio event loop, I can easily mix asyncio code with GLib/GTK code in the same thread. The next step is to see whether we can use this to make any APIs more convenient to use. A good candidate is APIs that make use of GAsyncResult. These APIs generally consist of one function call that initiates the asynchronous job and takes a callback. The callback will be invoked sometime later with a GAsyncResult object, which can be passed to a “finish” function to convert this to the result type relevant to the original call. This sort of API is a good candidate to convert to an asyncio coroutine. We can do this by writing a ready callback that simply stores the result in a future, and then have our coroutine await that future after initiating the job. For example, the following will asynchronously connect to the session bus:

import asyncio
from gi.repository import GLib, Gio

async def session_bus():
    loop = asyncio.get_running_loop()
    bus_ready = loop.create_future()

    def ready_callback(obj, result):
        try:
            bus = Gio.bus_get_finish(result)
        except GLib.Error as exc:
            loop.call_soon_threadsafe(bus_ready.set_exception, exc)
            return
        loop.call_soon_threadsafe(bus_ready.set_result, bus)

    Gio.bus_get(Gio.BusType.SESSION, None, ready_callback)
    return await bus_ready

We’ve now got an API that is conceptually as simple to use as the synchronous Gio.bus_get_sync call, but won’t block other work the application might be performing. Most of the code is fairly straightforward: the main wart is the two loop.call_soon_threadsafe calls. Although everything is executing in the same thread, my asyncio-glib library does not currently wake the asyncio event loop when called from a GLib callback. The call_soon_threadsafe method does the trick by generating some dummy IO to cause a wake up.

Cancellation

One feature we’ve lost with this wrapper is the ability to cancel the asynchronous job. On the GLib side, this is handled with the GCancellable object.
On the asyncio side, tasks are cancelled by injecting an asyncio.CancelledError exception into the coroutine. We can propagate this cancellation to the GLib side fairly seamlessly:

async def session_bus():
    ...
    cancellable = Gio.Cancellable()
    Gio.bus_get(Gio.BusType.SESSION, cancellable, ready_callback)
    try:
        return await bus_ready
    except asyncio.CancelledError:
        cancellable.cancel()
        raise

It’s important to re-raise the CancelledError exception, so that it will propagate up to any calling coroutines and let them perform their own cleanup. By following this pattern I was able to build enough wrappers to let me connect to the D-Bus daemon and issue asynchronous method calls without needing to chain together large sequences of callbacks. The wrappers were all similar enough that it shouldn’t be too difficult to factor out the common code.
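That common code might look roughly like the following. This is a speculative, GLib-free sketch of the shared pattern; fake_async_api and run_async_job are invented names standing in for a Gio call and the factored-out wrapper, so the structure can run anywhere:

```python
import asyncio

def fake_async_api(value, ready_callback):
    """Stand-in for a Gio-style call: finishes later by invoking a callback."""
    loop = asyncio.get_running_loop()
    loop.call_soon(ready_callback, value, None)

async def run_async_job(start_job):
    """Generic wrapper: start_job(ready_callback) kicks off the work; the
    coroutine suspends until the callback delivers a result or an error."""
    loop = asyncio.get_running_loop()
    future = loop.create_future()

    def ready_callback(result, error):
        if error is not None:
            future.set_exception(error)
        else:
            future.set_result(result)

    start_job(ready_callback)
    try:
        return await future
    except asyncio.CancelledError:
        # A real wrapper would call cancellable.cancel() here before re-raising.
        raise

async def main():
    result = await run_async_job(lambda cb: fake_async_api(42, cb))
    print(result)  # prints 42

asyncio.run(main())
```

Each concrete wrapper then only supplies the start call and the matching "finish" conversion, while the future plumbing and cancellation handling stay in one place.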
https://blogs.gnome.org/jamesh/2019/10/07/gasyncresult-with-python-asyncio/
The typical web form consists of controls (like labels, buttons, and data grids) and programming logic. In ASP.NET 2.0, there are two approaches to managing these control and code pieces: the single-file page model and the code-behind page model. Regardless of which model you choose, it’s important to understand how the runtime processes and executes your web forms behind the scenes. In this article, we will examine how web forms move from design time to run time in ASP.NET 2.0. Visual Studio creates a web form using the code-behind model when you add a new web form to a project and check the “Place code in separate file” checkbox in the Add New Item dialog. Visual Studio will add two files to the project: an aspx file and a .cs or .vb file. If we were creating a Default.aspx web page, the aspx file would look like the following.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<html>
<head runat="server">
</head>
<body>
    <form id="form1" runat="server">
    <div>
    </div>
    </form>
</body>
</html>

The ASPX page contains the typical markup for a form, including the html, head, and body tags. Any controls we add to the form using the designer will appear inside the ASPX file also. The three key pieces for us to focus on are in the @ Page directive on top. The AutoEventWireup attribute defaults to true, and we will return to this attribute later in the article. The CodeFile attribute is the link to the source code file with the programming logic for this web form; in this case the file is Default.aspx.cs. The runtime will use the CodeFile attribute to determine which source code file it needs to compile for this ASPX page. Likewise, the Inherits attribute tells the runtime the name of the class it will use as a base class for this web form, and again we will see how this works shortly when we see the entire picture. For now, let’s look at the CodeFile (code-behind) file for the ASPX page.
using System;
using System.Web.UI;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }
}

Here we find the _Default class (which we saw in the Inherits attribute) and our standard Page_Load event handler (note: some non-essential using directives have been removed from the listing to conserve space). We typically use the Page_Load event handler to initialize controls and perform data binding. Currently, in VB.NET, the default .vb file will not contain the Page_Load event handler, but you can easily add the event handler using the page event drop down list just above the text editor. In either language, the key to focus on is the ‘partial’ keyword. The partial keyword (new in .NET 2.0) is one of the rare keywords that is present in both C# and VB.NET and carries the same meaning (shocker!). Partial allows us to split a class definition across two or more source code files. In previous versions of the .NET framework this was impossible to achieve – a class definition had to exist inside of a single .cs or .vb file. The partial keyword plays an important role because it will allow the runtime to extend the definition of our _Default class with additional members. For instance, any control appearing in the ASPX markup with runat=”server” and an id attribute will ultimately act like a member variable in our class. For example, in the Page_Load event handler we can already access a variable named form1 – form1 represents the HTML form tag in the ASPX, which does have runat=”server” specified. How does all the magic happen? Let’s take a look at what happens when we browse to the web application to execute the form. When an incoming browser request arrives for our web form, the ASP.NET runtime needs to accomplish two tasks. First, the runtime needs to parse and understand the controls we’ve placed in the ASPX file.
Parsing involves reading the ASPX portion of the form, and generating source code to create those controls and HTML markup. The second job is then to compile the generated source code. You can actually see the code the runtime generates by poking around in the ‘Temporary ASP.NET Files’ directory, which will reside underneath the framework installation in \Microsoft.NET\Framework\v2.0.50215\Temporary ASP.NET Files. The following is an excerpt of the file generated by parsing the ASPX at the top of this article. Although a good portion of the actual code is omitted from this example, the snippet will give us a good idea of how the runtime is putting pieces together underneath the covers.

public partial class _Default : System.Web.SessionState.IRequiresSessionState
{
    protected System.Web.UI.HtmlControls.HtmlForm form1;

    protected System.Web.Profile.DefaultProfile Profile
    {
        get
        {
            return ((System.Web.Profile.DefaultProfile)(this.Context.Profile));
        }
    }

    protected System.Web.HttpApplication ApplicationInstance
    {
        get
        {
            return ((System.Web.HttpApplication)(this.Context.ApplicationInstance));
        }
    }
}

namespace ASP
{
    public class Default_aspx : _Default
    {
        public Default_aspx()
        {
            // ...
        }

        protected override void FrameworkInitialize()
        {
            base.FrameworkInitialize();
            this.__BuildControlTree(this);
            this.AddWrappedFileDependencies(ASP.Default_aspx.__fileDependencies);
            this.Request.ValidateInput();
        }

        // ...
    }
}

The first piece to notice is the partial class definition at the top of the generated code. This partial class will complete the _Default class definition from our Default.aspx.cs code-behind file by adding some additional members. One of these members is a field to represent the HTML form tag on the page (again, because it has a runat=”server” tag). Any other server controls we would place on the form would also become members of the class. The second class (Default_aspx) in this generated file represents the ASPX page itself.
This class inherits the _Default class and contains all the code needed to initialize the form, instantiate the server controls, and spit out literal HTML. The runtime will compile both of these classes (_Default and Default_aspx) into the same assembly. This assembly will be located in the temporary ASP.NET files directory. One last question to answer is this: how does our Page_Load method get invoked? If you examine all of the code in the code-behind and in the generated files you’ll see no use of delegates in C#, nor any code with the Handles clause in VB.NET (these are the constructs generally used to wire up event handlers). The answer is in the AutoEventWireup attribute back in the @ Page directive for Default.aspx. The AutoEventWireup attribute is set to true (and would default to true if not present). When true, the ASP.NET runtime will attempt to attach page methods to events based on the method names. If the method follows the convention of Page_EventName, then the method will be paired with an event by the name of EventName. For instance, the runtime will wire up Page_Load with the Load event, and Page_Init to the Init event. All of this happens without explicit delegates or Handles clauses. In this section we will create a new Default.aspx web form, but leave the “Place code in a separate file” checkbox unchecked. We are also going to drag a Label control onto the form. The IDE will show us an ASPX file with the following contents.

<%@ Page Language="C#" %>
<script runat="server">
</script>
<asp:Label ID="Label1" runat="server"></asp:Label>

There is no .cs or .vb file associated with the ASPX file, so the @ Page directive does not need any attributes indicating where the associated source code file resides. All of the logic we want to add to the page can appear inside the server side script tag. For instance, we could change the text of the label control by placing the following code inside the script block.
Label1.Text = "Hello World!";

Remember AutoEventWireup defaults to true when not present in the @ Page directive, so the Page_Load method will automatically wire up to the page object’s load event. But what class is Page_Load a part of? We can see exactly what happens by looking for C# source code files in the temporary ASP.NET files directory when the web form executes. Here is an excerpt.

public class Default_aspx : System.Web.UI.Page, System.Web.SessionState.IRequiresSessionState
{
    protected System.Web.UI.WebControls.Label Label1;
    protected System.Web.UI.HtmlControls.HtmlForm form1;

    protected void Page_Load(object sender, EventArgs e)
    {
        Label1.Text = "Hello World!";
    }

    // ....
}

In the single-file page model, the ASP.NET runtime generates code for only a single class (Default_aspx). Compared to the code-behind model, which used two partial class declarations and an inherited class, the single-file model does the same amount of work with fewer pieces. However, since the runtime is generating the extra pieces for us, we shouldn’t consider this an advantage.

by K. Scott Allen

Feel free to leave questions or comments about this article on my blog.
http://www.odetocode.com/Articles/406.aspx
# Unicorns break into RTS: analyzing the OpenRA source code ![image1.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/dcf/749/5ef/dcf7495efe14b6c2b391d21c3deff1a8.png) This article is about the check of the OpenRA project using the static PVS-Studio analyzer. What is OpenRA? It is an open source game engine designed to create real-time strategies. The article describes the analysis process, project features, and warnings that PVS-Studio has issued. And, of course, here we will discuss some features of the analyzer that made the project checking process more comfortable. OpenRA ------ ![image2.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/5eb/88d/c23/5eb88dc230b02a9f79effc3401ccc649.png) The project chosen for the check is a game engine for RTS in the style of games such as Command & Conquer: Red Alert. More information can be found on the [website](http://www.openra.net/). The source code is written in C# and is available for viewing and using in the [repository](https://github.com/OpenRA/OpenRA). There were 3 reasons for choosing OpenRA for a review. First, it seems to be of interest to many people. In any case, this applies to the inhabitants of GitHub, since the repository has reached the rating of more than 8 thousand stars. Second, the OpenRA code base contains 1285 files. Usually this amount is quite enough to hope to find interesting warnings in them. And third… Game engines are cool. Redundant warnings ------------------ I analyzed OpenRA using PVS-Studio and at first was encouraged by the results: ![image3.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/795/845/e87/795845e8722bff7c57f1c66f5451aeb1.png) I decided that among so many High level warnings, I could definitely find a whole lot of different sapid errors. Therefore, based on them, I would write the coolest and most intriguing article :) But no such luck! One glance at the warnings and everything clicked into place. 
1,277 of the 1,306 High level warnings were related to the [V3144](https://www.viva64.com/en/w/v3144/) diagnostic. It gives messages of the type "This file is marked with a copyleft license, which requires you to open the derived source code". This diagnostic is described in more detail [here](https://www.viva64.com/en/w/v3144/). Obviously, I wasn't interested in warnings of such kind, as OpenRA is already an open source project. Therefore, they had to be hidden so that they didn't interfere with viewing the rest of the log. Since I used the Visual Studio plugin, it was easy to do so. I just had to right-click on one of the [V3144](https://www.viva64.com/en/w/v3144/) warnings and select "Hide all [V3144](https://www.viva64.com/en/w/v3144/) errors" in the opening menu. ![image5.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/3f2/0fd/4fe/3f20fd4fea466bc5a86b45d9e32de817.png) You can also choose which warnings will be displayed in the log by going to the "Detectable Errors (C#)" section in the analyzer options. ![image7.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/6e1/690/081/6e1690081d5b6972783e09c78ab36a32.png) To go to them using the plugin for Visual Studio 2019, click on the top menu Extensions->PVS-Studio->Options. Check results ------------- After the [V3144](https://www.viva64.com/en/w/v3144/) warnings were filtered out, there were significantly fewer warnings in the log: ![image8.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/8ce/498/19e/8ce49819ed39a07b3b4ebbd6b58bcacd.png) Nevertheless, I managed to find worthy ones among them. ### Meaningless conditions Quite a few positives pointed to unnecessary checks. This may indicate an error, because people usually don't write this code intentionally. However, in OpenRA, it often looks as if these unnecessary conditions were added on purpose. For example: ``` public virtual void Tick() { .... 
  Active = !Disabled && Instances.Any(i => !i.IsTraitPaused);
  if (!Active)
    return;

  if (Active)
  {
    ....
  }
}
```

**Analyzer warning**: [V3022](https://www.viva64.com/en/w/v3022/) Expression 'Active' is always true. SupportPowerManager.cs 206

PVS-Studio quite rightly notes that the second check is meaningless, because if *Active* is *false*, it won't execute. It might be an error, but I think it was written intentionally. What for? Well, why not? Perhaps, what we have here is a temporary solution, which was supposed to be refined later. In such cases, it is quite convenient that the analyzer will remind a developer of such shortcomings. Let's look at another just-in-case check:

```
Pair[] MakeComponents(string text)
{
  ....
  if (highlightStart > 0 && highlightEnd > highlightStart) // <=
  {
    if (highlightStart > 0) // <=
    {
      // Normal line segment before highlight
      var lineNormal = line.Substring(0, highlightStart);
      components.Add(Pair.New(lineNormal, false));
    }

    // Highlight line segment
    var lineHighlight = line.Substring(
      highlightStart + 1,
      highlightEnd - highlightStart - 1
    );
    components.Add(Pair.New(lineHighlight, true));
    line = line.Substring(highlightEnd + 1);
  }
  else
  {
    // Final normal line segment
    components.Add(Pair.New(line, false));
    break;
  }
  ....
}
```

**Analyzer warning**: [V3022](https://www.viva64.com/en/w/v3022/) Expression 'highlightStart > 0' is always true. LabelWithHighlightWidget.cs 54

Again, it is obvious that re-checking is completely pointless. The value of *highlightStart* is checked twice, right in neighbouring lines. A mistake? It is possible that in one of the conditions wrong variables are selected for checking. Anyway, it's hard to say for sure what's going on here. One thing is definitely clear — the code should be reviewed and corrected. Or there should be an explanation if an additional check is still needed for some reason. Here is another similar case:

```
public static void ButtonPrompt(....)
{
  ....
  var cancelButton = prompt.GetOrNull("CANCEL_BUTTON");
  ....
  if (onCancel != null && cancelButton != null)
  {
    cancelButton.Visible = true;
    cancelButton.Bounds.Y += headerHeight;
    cancelButton.OnClick = () =>
    {
      Ui.CloseWindow();
      if (onCancel != null)
        onCancel();
    };

    if (!string.IsNullOrEmpty(cancelText) && cancelButton != null)
      cancelButton.GetText = () => cancelText;
  }
  ....
}
```

**Analyzer warning**: [V3063](https://www.viva64.com/en/w/v3063/) A part of conditional expression is always true if it is evaluated: cancelButton != null. ConfirmationDialogs.cs 78

*cancelButton* can indeed be *null*, because the value returned by the *GetOrNull* method is written to this variable. However, it stands to reason that by no means will *cancelButton* turn to *null* in the body of the conditional operator. Yet the check is still present. If you don't pay attention to the external condition, you find yourself in a very strange situation: first the variable's properties are accessed, and then the developer decides to check whether it is *null* or not. At first, I assumed that the project might be using some specific logic related to overloading the "==" operator. In my opinion, implementing something like this for reference types in a project is a controversial idea. Not to mention the fact that unusual behavior makes it harder for other developers to understand the code. At the same time, it is difficult for me to imagine a situation where you can't do without such tricks. Although it is likely that in some specific case this would be a convenient solution. In the Unity game engine, for example, the "*==*" operator is redefined for the *UnityEngine.Object* class. The official documentation available by the [link](https://docs.unity3d.com/ScriptReference/Object-operator_eq.html) shows that comparing instances of this class with *null* doesn't work as usual. Well, the developer probably had reasons for implementing this unusual logic. I didn't find anything like this in OpenRA :).
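To make the Unity comparison concrete, here is a small hypothetical sketch (not OpenRA or Unity code) of a reference type whose "==" is overloaded so that a live object compares equal to *null*:

```
using System;

// Hypothetical type, Unity-style: a destroyed handle compares equal to null.
class Handle
{
    public bool Destroyed;

    public static bool operator ==(Handle a, Handle b)
    {
        if (ReferenceEquals(a, b))
            return true;

        bool aNull = ReferenceEquals(a, null) || a.Destroyed;
        bool bNull = ReferenceEquals(b, null) || b.Destroyed;
        return aNull && bNull;
    }

    public static bool operator !=(Handle a, Handle b) => !(a == b);

    public override bool Equals(object obj) => this == obj as Handle;
    public override int GetHashCode() => 0;
}

class Program
{
    static void Main()
    {
        var h = new Handle { Destroyed = true };
        Console.WriteLine(h == null); // True, even though h references a real object
    }
}
```

With such a type, a check like `if (x == null)` after property accesses is no longer automatically redundant, which is why the analyzer's reasoning only holds when "==" keeps its default reference semantics.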
So if there is any meaning in the *null* checks discussed earlier, it is something else. PVS-Studio managed to find a few more similar cases, but there is no need to list them all here. Well, it's a bit boring to watch the same triggers. Fortunately (or not), the analyzer was able to find other oddities. ### Unreachable code ``` void IResolveOrder.ResolveOrder(Actor self, Order order) { .... if (!order.Queued || currentTransform == null) return; if (!order.Queued && currentTransform.NextActivity != null) currentTransform.NextActivity.Cancel(self); .... } ``` **Analyzer warning**: [V3022](https://www.viva64.com/en/w/v3022/) Expression '!order.Queued && currentTransform.NextActivity != null' is always false. TransformsIntoTransforms.cs 44 Once again, we have a pointless check here. However, unlike the previous ones, this is not just an extra condition, but a real unreachable code. The *always true* checks above didn't actually affect the program's performance. You can remove them from the code, or you can leave them – nothing will change. Whereas in this case, the strange check results in the fact that a part of the code isn't executed. At the same time, it is difficult for me to guess what changes should be made here as an amendment. In the simplest and most preferable scenario, unreachable code simply shouldn't be executed. Then there is no mistake. However, I doubt that the programmer deliberately wrote the line just for the sake of beauty. ### Uninitialized variable in the constructor ``` public class CursorSequence { .... 
public readonly ISpriteFrame[] Frames; public CursorSequence( FrameCache cache, string name, string cursorSrc, string palette, MiniYaml info ) { var d = info.ToDictionary(); Start = Exts.ParseIntegerInvariant(d["Start"].Value); Palette = palette; Name = name; if ( (d.ContainsKey("Length") && d["Length"].Value == "*") || (d.ContainsKey("End") && d["End"].Value == "*") ) Length = Frames.Length - Start; else if (d.ContainsKey("Length")) Length = Exts.ParseIntegerInvariant(d["Length"].Value); else if (d.ContainsKey("End")) Length = Exts.ParseIntegerInvariant(d["End"].Value) - Start; else Length = 1; Frames = cache[cursorSrc] .Skip(Start) .Take(Length) .ToArray(); .... } } ``` **Analyzer warning**: [V3128](https://www.viva64.com/en/w/v3128/) The 'Frames' field is used before it is initialized in constructor. CursorSequence.cs 35 A nasty case. An attempt to get the *Length* property value from an uninitialized variable will inevitably result in the *NullReferenceException*. In a normal situation, it is unlikely that such an error would have gone unnoticed – yet the inability to create an instance of the class is easily detected. But here the exception will only be thrown if the condition ``` (d.ContainsKey("Length") && d["Length"].Value == "*") || (d.ContainsKey("End") && d["End"].Value == "*") ``` is true. It is difficult to judge how to correct the code so that everything is fine. I can only assume that the function should look something like this: ``` public CursorSequence(....) 
{ var d = info.ToDictionary(); Start = Exts.ParseIntegerInvariant(d["Start"].Value); Palette = palette; Name = name; ISpriteFrame[] currentCache = cache[cursorSrc]; if ( (d.ContainsKey("Length") && d["Length"].Value == "*") || (d.ContainsKey("End") && d["End"].Value == "*") ) Length = currentCache.Length - Start; else if (d.ContainsKey("Length")) Length = Exts.ParseIntegerInvariant(d["Length"].Value); else if (d.ContainsKey("End")) Length = Exts.ParseIntegerInvariant(d["End"].Value) - Start; else Length = 1; Frames = currentCache .Skip(Start) .Take(Length) .ToArray(); .... } ``` In this version, the stated problem is absent, but only the developer can tell to what extent it corresponds to the original idea. ### Potential typo ``` public void Resize(int width, int height) { var oldMapTiles = Tiles; var oldMapResources = Resources; var oldMapHeight = Height; var oldMapRamp = Ramp; var newSize = new Size(width, height); .... Tiles = CellLayer.Resize(oldMapTiles, newSize, oldMapTiles[MPos.Zero]); Resources = CellLayer.Resize( oldMapResources, newSize, oldMapResources[MPos.Zero] ); Height = CellLayer.Resize(oldMapHeight, newSize, oldMapHeight[MPos.Zero]); Ramp = CellLayer.Resize(oldMapRamp, newSize, oldMapHeight[MPos.Zero]); .... } ``` **Analyzer warning**: [V3127](https://www.viva64.com/en/w/v3127/) Two similar code fragments were found. Perhaps, this is a typo and 'oldMapRamp' variable should be used instead of 'oldMapHeight' Map.cs 964 The analyzer detected a suspicious fragment associated with passing arguments to the function. Let's look at the calls separately: ``` CellLayer.Resize(oldMapTiles, newSize, oldMapTiles[MPos.Zero]); CellLayer.Resize(oldMapResources, newSize, oldMapResources[MPos.Zero]); CellLayer.Resize(oldMapHeight, newSize, oldMapHeight[MPos.Zero]); CellLayer.Resize(oldMapRamp, newSize, oldMapHeight[MPos.Zero]); ``` Oddly enough, the last call passes *oldMapHeight*, not *oldMapRamp*. Of course, not all such cases are erroneous. 
It is quite possible that everything is written correctly here. But you will probably agree that this place looks unusual. I'm inclined to believe that there is an error for sure. *Note by a colleague [Andrey Karpov](https://www.viva64.com/en/b/a/andrey-karpov/). I don't see anything strange in this code :). It's a classic [last line mistake](https://www.viva64.com/en/b/0260/)!* If there is no error, then one should add some explanation. After all, if a snippet looks like an error, then someone will want to fix it.

### True, true and nothing but true

The project revealed very peculiar methods, the return value of which is of the *bool* type. Their uniqueness lies in the fact that they return *true* under any conditions. For example:

```
static bool State(
  S server, Connection conn, Session.Client client, string s
)
{
  var state = Session.ClientState.Invalid;
  if (!Enum.TryParse(s, false, out state))
  {
    server.SendOrderTo(conn, "Message", "Malformed state command");
    return true;
  }

  client.State = state;

  Log.Write(
    "server", "Player @{0} is {1}",
    conn.Socket.RemoteEndPoint, client.State
  );

  server.SyncLobbyClients();
  CheckAutoStart(server);

  return true;
}
```

**Analyzer warning**: [V3009](https://www.viva64.com/en/w/v3009/) It's odd that this method always returns one and the same value of 'true'. LobbyCommands.cs 123

Is everything OK in this code? Is there an error? It looks extremely strange. Why hasn't the developer used *void*? It's not surprising that the analyzer finds such a place strange, but we still have to admit that the programmer actually had a reason to write it this way. Which one? I decided to check where this method is called and whether its returned *always true* value is used.
It turned out that there is only one reference to it in the same class – in the *commandHandlers* dictionary, which has the type

```
IDictionary<string, Func<S, Connection, Session.Client, string, bool>>
```

During the initialization, the following values are added to it

```
{"state", State},
{"startgame", StartGame},
{"slot", Slot},
{"allow_spectators", AllowSpectators}
```

and others. Here we have a rare (I'd like to think so) case of static typing that creates problems for us. After all, to make a dictionary in which the values are functions with different signatures… is at least challenging. *commandHandlers* is only used in the *InterpretCommand* method:

```
public bool InterpretCommand(
  S server, Connection conn, Session.Client client, string cmd
)
{
  if (
    server == null || conn == null || client == null
    || !ValidateCommand(server, conn, client, cmd)
  )
    return false;

  var cmdName = cmd.Split(' ').First();
  var cmdValue = cmd.Split(' ').Skip(1).JoinWith(" ");

  Func<S, Connection, Session.Client, string, bool> a;
  if (!commandHandlers.TryGetValue(cmdName, out a))
    return false;

  return a(server, conn, client, cmdValue);
}
```

Apparently, the developer intended to have a universal way to map strings to certain operations. I think the chosen method is not the only one, but it is not so easy to offer something more convenient or correct in such a situation. Especially if you don't use *dynamic* or something like that. If you have any ideas about this, please leave comments. I would be interested to look at various solutions to this problem :).

It turns out that warnings associated with *always true* methods in this class are most likely false. And yet… What disquiets me here is this "most likely" :) One has to really be careful and not miss an actual error among these positives. All such warnings should be first carefully checked, and then marked as false if necessary. You can simply do it. You should leave a special comment in the place indicated by the analyzer:

```
static bool State(....)
//-V3009
```

There is another way: you can select the warnings that need to be marked as false, and click on "Mark selected messages as False Alarms" in the context menu.

![image10.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/4f7/a90/64b/4f7a9064be37561dd36d299e47885630.png)

You can learn more about this topic in the [documentation](https://www.viva64.com/en/m/0017/).

### Extra check for null?

```
static bool SyncLobby(....)
{
  if (!client.IsAdmin)
  {
    server.SendOrderTo(conn, "Message", "Only the host can set lobby info");
    return true;
  }

  var lobbyInfo = Session.Deserialize(s);
  if (lobbyInfo == null) // <=
  {
    server.SendOrderTo(conn, "Message", "Invalid Lobby Info Sent");
    return true;
  }

  server.LobbyInfo = lobbyInfo;
  server.SyncLobbyInfo();

  return true;
}
```

**Analyzer warning**: [V3022](https://www.viva64.com/en/w/v3022/) Expression 'lobbyInfo == null' is always false. LobbyCommands.cs 851

Here we have another method that always returns *true*. However, this time we are looking at a different type of warning. We have to pore over such places with all diligence, as there is no guarantee that we deal with redundant code. But first things first. The *Deserialize* method never returns *null* – you can easily see this by looking at its code:

```
public static Session Deserialize(string data)
{
  try
  {
    var session = new Session();
    ....
    return session;
  }
  catch (YamlException)
  {
    throw new YamlException(....);
  }
  catch (InvalidOperationException)
  {
    throw new YamlException(....);
  }
}
```

For ease of reading, I have shortened the source code of the method. You can see it in full by clicking on the [link](https://github.com/OpenRA/OpenRA/blob/f642cead441446e16e565ac855b49186a899c253/OpenRA.Game/Network/Session.cs). Or take my word for it that the *session* variable doesn't turn to *null* under any circumstances. So what do we see at the bottom part? *Deserialize* doesn't return *null*, and if something goes wrong, it throws exceptions.
The developer who wrote the *null* check after the call apparently thought otherwise. Most likely, in an exceptional situation, the *SyncLobby* method was supposed to execute the following code, which in fact never runs, because *lobbyInfo* is never *null*:

```
if (lobbyInfo == null)
{
  server.SendOrderTo(conn, "Message", "Invalid Lobby Info Sent");
  return true;
}
```

I believe that instead of this "extra" check, the author should use *try*-*catch*. Or take another tack and write, say, a *TryDeserialize* method that returns *null* in an exceptional situation.

### Possible NullReferenceException

```
public ConnectionSwitchModLogic(....)
{
  ....
  var logo = panel.GetOrNull("MOD_ICON");
  if (logo != null)
  {
    logo.GetSprite = () =>
    {
      ....
    };
  }

  if (logo != null && mod.Icon == null)    // <=
  {
    // Hide the logo and center just the text
    if (title != null)
      title.Bounds.X = logo.Bounds.Left;

    if (version != null)
      version.Bounds.X = logo.Bounds.X;

    width -= logo.Bounds.Width;
  }
  else
  {
    // Add an equal logo margin on the right of the text
    width += logo.Bounds.Width;    // <=
  }
  ....
}
```

**Analyzer warning**: [V3125](https://www.viva64.com/en/w/v3125/) The 'logo' object was used after it was verified against null. Check lines: 236, 222. ConnectionLogic.cs 236

As for this case, I'm sure as hell there is an error. We are definitely not looking at "extra" checks here, because the *GetOrNull* method can indeed return a null reference. What happens if *logo* is *null*? Accessing the *Bounds* property will result in an exception, which was clearly not part of the developer's plans.
Perhaps, the fragment needs to be rewritten in the following way:

```
if (logo != null)
{
  if (mod.Icon == null)
  {
    // Hide the logo and center just the text
    if (title != null)
      title.Bounds.X = logo.Bounds.Left;

    if (version != null)
      version.Bounds.X = logo.Bounds.X;

    width -= logo.Bounds.Width;
  }
  else
  {
    // Add an equal logo margin on the right of the text
    width += logo.Bounds.Width;
  }
}
```

This option is quite easy to comprehend, although the additional nesting doesn't look too great. As a more concise solution, one could use the null-conditional operator:

```
// Add an equal logo margin on the right of the text
width += logo?.Bounds.Width ?? 0;    // <=
```

By the way, the first version looks preferable to me: it is easy to read and raises no questions. But some developers value brevity quite highly, so I decided to cite the second version as well :).

### Maybe, OrDefault after all?

```
public MapEditorLogic(....)
{
  var editorViewport = widget.Get("MAP_EDITOR");

  var gridButton = widget.GetOrNull("GRID_BUTTON");
  var terrainGeometryTrait = world.WorldActor.Trait();

  if (gridButton != null && terrainGeometryTrait != null)    // <=
  {
    ....
  }

  var copypasteButton = widget.GetOrNull("COPYPASTE_BUTTON");
  if (copypasteButton != null)
  {
    ....
  }

  var copyFilterDropdown = widget.Get(....);
  copyFilterDropdown.OnMouseDown = _ =>
  {
    copyFilterDropdown.RemovePanel();
    copyFilterDropdown.AttachPanel(CreateCategoriesPanel());
  };

  var coordinateLabel = widget.GetOrNull("COORDINATE_LABEL");
  if (coordinateLabel != null)
  {
    ....
  }
  ....
}
```

**Analyzer warning**: [V3063](https://www.viva64.com/en/w/v3063/) A part of conditional expression is always true if it is evaluated: terrainGeometryTrait != null. MapEditorLogic.cs 35

Let's delve into this fragment. Note that each time the *GetOrNull* method of the *Widget* class is used, a *null* equality check is performed. However, if *Get* is used, there is no check.
This is logical – the *Get* method doesn't return *null*:

```
public T Get<T>(string id) where T : Widget
{
  var t = GetOrNull<T>(id);
  if (t == null)
    throw new InvalidOperationException(....);
  return t;
}
```

If the element is not found, an exception is thrown – this is reasonable behavior. At the same time, the logical option would be to check the values returned by the *GetOrNull* method for equality to the null reference. In the code above, the value returned by the *Trait* method is checked for *null*. It is inside the *Trait* method that *Get* of the *TraitDictionary* class is called:

```
public T Trait<T>()
{
  return World.TraitDict.Get<T>(this);
}
```

Can it be that this *Get* behaves differently from the one we discussed earlier? Well, the classes are different. Let's check it out:

```
public T Get<T>(Actor actor)
{
  CheckDestroyed(actor);
  return InnerGet<T>().Get(actor);
}
```

The *InnerGet* method returns an instance of *TraitContainer*. The *Get* implementation in this class is very similar to *Get* of the *Widget* class:

```
public T Get(Actor actor)
{
  var result = GetOrDefault(actor);
  if (result == null)
    throw new InvalidOperationException(....);
  return result;
}
```

The main similarity is that *null* is never returned here either. If something goes wrong, an *InvalidOperationException* is similarly thrown. Therefore, the *Trait* method behaves the same way.

It may be that this is just an extra check that doesn't affect anything. It looks a bit strange, but you can't say that it will confuse a reader much. If, however, the check is needed indeed, then in some cases an exception will be thrown unexpectedly, which is sad. So in this fragment it seems more appropriate to call, for example, *TraitOrNull*. However, there is no such method :). But there is *TraitOrDefault*, which is the equivalent of *GetOrNull* for this case.

There is another similar case related to the *Get* method:

```
public AssetBrowserLogic(....)
{
  ....
  frameSlider = panel.Get("FRAME_SLIDER");
  if (frameSlider != null)
  {
    ....
  }
  ....
}
```

**Analyzer warning**: [V3022](https://www.viva64.com/en/w/v3022/) Expression 'frameSlider != null' is always true. AssetBrowserLogic.cs 128

As in the code considered earlier, something is wrong here. Either the check is really unnecessary, or one still needs to call *GetOrNull* instead of *Get*.

### Lost assignment

```
public SpawnSelectorTooltipLogic(....)
{
  ....
  var textWidth = ownerFont.Measure(labelText).X;
  if (textWidth != cachedWidth)
  {
    label.Bounds.Width = textWidth;
    widget.Bounds.Width = 2 * label.Bounds.X + textWidth;    // <=
  }

  widget.Bounds.Width = Math.Max(                            // <=
    teamWidth + 2 * labelMargin,
    label.Bounds.Right + labelMargin);

  team.Bounds.Width = widget.Bounds.Width;
  ....
}
```

**Analyzer warning**: [V3008](https://www.viva64.com/en/w/v3008/) The 'widget.Bounds.Width' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 78, 75. SpawnSelectorTooltipLogic.cs 78

It seems that if the *textWidth != cachedWidth* condition is true, *widget.Bounds.Width* must be set to a value specific to this case. However, the assignment made below, regardless of whether this condition is true, makes the line

```
widget.Bounds.Width = 2 * label.Bounds.X + textWidth;
```

pointless. It is likely that the author simply forgot to write *else* here:

```
if (textWidth != cachedWidth)
{
  label.Bounds.Width = textWidth;
  widget.Bounds.Width = 2 * label.Bounds.X + textWidth;
}
else
{
  widget.Bounds.Width = Math.Max(
    teamWidth + 2 * labelMargin,
    label.Bounds.Right + labelMargin);
}
```

### Checking the default value

```
public void DisguiseAs(Actor target)
{
  ....
  var tooltip = target.TraitsImplementing<ITooltip>().FirstOrDefault();
  AsPlayer = tooltip.Owner;
  AsActor = target.Info;
  AsTooltipInfo = tooltip.TooltipInfo;
  ....
}
```

**Analyzer warning**: [V3146](https://www.viva64.com/en/w/v3146/) Possible null dereference of 'tooltip'.
The 'FirstOrDefault' can return default null value. Disguise.cs 192

When is *FirstOrDefault* usually used instead of *First*? If the selection is empty, *First* throws an *InvalidOperationException*. *FirstOrDefault* doesn't throw an exception, but returns *null* for reference types. Various classes in the project implement the *ITooltip* interface. Thus, if *target.TraitsImplementing<ITooltip>()* returns an empty selection, *null* is written to *tooltip*. Accessing the properties of this object right after that will result in a *NullReferenceException*.

In cases where the developer is sure that the selection won't be empty, it is better to use *First*. If one isn't sure, it's worth checking the value returned by *FirstOrDefault*. It is rather strange that we don't see such a check here. After all, the values returned by the *GetOrNull* method mentioned earlier were always checked. Why wasn't it done here? Who knows… Anyway, the developers will surely answer these questions – in the end, it is the code's author who will be fixing it :)

Conclusion
----------

OpenRA turned out to be a nice and interesting project to check. The developers did a lot of work and didn't forget that source code should be easy to read. Of course, we did find some… controversies, but one can't simply do without them :)

At the same time, for all their effort, developers are, alas, only human. Some of the considered warnings are extremely difficult to notice without using the analyzer. It is sometimes difficult to find an error even immediately after writing it, needless to say how hard it is to find one after a long time. Obviously, it is much better to detect an error itself than its consequences. To do this, you could spend hours manually rechecking huge amounts of new code – and take another look at the old code as well, in case there is an oversight there.
Yes, code reviews are really useful, but when you have to look through a large amount of code, you stop noticing some things over time. Besides, reviews take a lot of time and effort.

![image11.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/a46/368/3fa/a463683fadadc18c68557b79dea8d3a9.png)

Static analysis is just a convenient addition to other methods of checking source code quality, such as code review. PVS-Studio will find "simple" (and sometimes tricky) errors instead of a developer, allowing people to focus on more serious issues. Yes, the analyzer sometimes gives false positives and cannot find all the errors, but it will save you a lot of time and nerves. It is not perfect, yet in general PVS-Studio makes the development process much easier, more enjoyable, and even (unexpectedly!) cheaper.

In fact, you don't need to take my word for it – it's much better to make sure that the above is true yourself. You can use the [link](https://www.viva64.com/en/pvs-studio-download/) to download the analyzer and get a trial key. What could be simpler?

Well, that's it for this time. Thanks for your attention! I wish you clean code and an empty error log!
https://habr.com/ru/post/514964/
As a web developer, you will encounter situations that call for an effective token replacement scheme. Token replacement is really just a technologically-savvy way of saying string replacement, and involves substituting a "real" value for a "token" value in a string. This article presents a powerful approach to token replacement in ASP.NET 2.0. The goal is to create a centralized token replacement mechanism that works in ASP.NET controls, HTML controls, static HTML markup, and in text placed on the page from the code-behind. In other words, in any situation imaginable. If you want to skip the background info and get right into the replacement mechanism, then jump to the section titled Acquiring All ASP.NET Page Content. Otherwise, let's get some background on the various token replacement options in ASP.NET.

Basic token replacement concepts

The ultimate goal of token replacement is to make the value of a string dynamic. You begin with a static string containing tokens, and then update those tokens with replacement values to produce your dynamic string. For example, you may have the following string in an application that emails users a new password if theirs has been lost:

    Dear [$NAME$],
    You recently requested a password reset. Your new password is [$PASSWORD$].
    Please keep this password in a safe location and quit losing it.
    Thank You

In this scenario, the application has two values to communicate to the user: their name (to personalize the email) and their new password (because they need it to log in). Let's say we wish to assign the value "Matt" to the [$NAME$] token and "5ZQS76Bv" to the [$PASSWORD$] token. The resulting text would look like this:

    Dear Matt,
    You recently requested a password reset. Your new password is 5ZQS76Bv.
    Please keep this password in a safe location and quit losing it.
    Thank You

One question that might arise is, why not just use concatenation to build the string?
Concatenation is certainly faster, but it comes down to a question of maintainability. Concatenation can only occur in code, so you would have to hard-code the majority of the string, like this:

    "Dear " + Name + ",\r\nYou recently requested a password reset. Your new password is "
        + password + ". Please keep this password in a safe location and quit losing it.\r\n\r\nThank You"

What happens if you want to update the email content? You would have to update your code, recompile it, and redeploy it. Plus, a complex HTML email built directly in C# source is not the easiest thing to debug and maintain. The token replacement approach allows you to store the content in a separate file, read it into your application, and replace the tokens in code. This separates the content from the application and allows you to make updates in a less convoluted environment and without having to recompile or redeploy the application.

Token / String Replacement in ASP.NET

There are several ways to replace strings in ASP.NET 2.0, but I'm only going to touch on the three that I consider the most popular:

- Inline Script
- The String.Format method
- The String.Replace and StringBuilder.Replace instance methods

Each of these is outlined in more detail below.

Inline Script

Although ASP.NET has moved to a code-behind model, you can still write inline script just as you could in original ASP. Inline script still has its uses and can be extremely helpful for string replacement scenarios. If you think about it, inline script is essentially a really powerful string replacement mechanism that finds your script in a string (i.e. the page markup), runs that script, and then replaces the script with the value it produces. Here's our original example modified for inline script:

    Dear <%=Name%>,
    You recently requested a password reset. Your new password is <%=Password%>.
    Please keep this password in a safe location and quit losing it.
    Thank You

One issue with inline script is that you cannot always use it in conjunction with ASP.NET controls. For example, let's say that you wanted a standard ASP.NET button on the page to say "Matt, Click Here to Email Your Password." You cannot do the following:

    <asp:Button ID="EmailButton" runat="server"
        Text="<%=Name%>, Click Here to Email Your Password" />

Nor can you use the mechanism from inside your code-behind. So, it's useful for making a single page, but not for our global string-replacement needs.

The String.Format Method

The String.Format method accepts a string containing tokens followed by a list of replacement values for the tokens found in the string. The tokens in the string are numbers surrounded by curly brackets, where the number corresponds to the index of the replacement value passed into the method.

    String.Format("Name: {0}, Gender: {1}, Age: {2}, Height: {3}, Weight: {4}",
        "Matt", "Male", "25", "5'10\"", "160");
    //OUTPUT-->Name: Matt, Gender: Male, Age: 25, Height: 5'10", Weight: 160

The first token, {0}, corresponds to the first replacement value "Matt", the second token, {1}, corresponds to "Male", and so on. You can have any number of tokens and replacement values, but you have to have at least as many replacement values as you have tokens. In other words, when the function encounters a number in curly brackets, {N}, there had better be a corresponding replacement value for that token or else the function throws an exception.

    // Throws an exception because there are five tokens
    // but only one replacement value
    String.Format("Name: {0}, Gender: {1}, Age: {2}, Height: {3}, Weight: {4}", "Matt");

Tokens do not need to appear in order, you can use a token more than once in the string, and you can have more replacement values than tokens:

    String.Format("{1}{3}{2}{2}{0}", "O", "H", "L", "E", "X", "Y", "Z", "!");
    //OUTPUT-->HELLO

In this example, the token {2} appears twice, the tokens are not in order, and the "X", "Y", "Z", and "!" replacement values are never used.
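One more String.Format detail worth knowing, because it trips people up in practice: a literal curly bracket in the format string must be escaped by doubling it, or String.Format either treats it as the start of a token or throws a FormatException. The sample string below is made up for illustration:

```csharp
// "{{" and "}}" produce literal braces in the output
string css = String.Format(".{0} {{ color: {1}; }}", "header", "red");
//OUTPUT-->.header { color: red; }
```

This matters whenever the text surrounding your tokens contains braces of its own, such as CSS or JSON embedded in a page.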
You can easily put tokens for the String.Format method in ASP.NET controls, HTML controls, static HTML markup, and code in the code-behind, so it is a candidate for use in our global string replacement mechanism. My biggest issue is that the tokens are non-intuitive. When you see a token like [$NAME$], you have some idea what it represents, whereas the token {7} communicates very little.

The String.Replace and StringBuilder.Replace Methods

Lastly, we have the Replace methods found on String and StringBuilder instances. Both of these methods operate using the same basic logic. You provide a string containing a token, identify the token, supply a replacement value for the token, and the method replaces any instances of the token with the replacement value.

We'll begin by looking at the Replace method on a String instance. Since the Replace method is only available on a String instance and not from the String class itself, you start by creating a String instance. Then you call the Replace method from the instance by passing in a token and replacement value, and the method returns a string containing the replacements. Your original string, however, remains unchanged. If you want to update your string with the replacement, you have to assign the result of the Replace function back to your string, as demonstrated in the following example:

    string myString = "Hello, my name is [$NAME$].";

    //This does NOT change the value of myString
    myString.Replace("[$NAME$]", "Matt");

    //You have to assign the value of the Replace function back
    //to the string to change the value
    myString = myString.Replace("[$NAME$]", "Matt");

You call the Replace method on a StringBuilder instance in the exact same way as a string: by passing in a token and replacement value.
However, the Replace method on a StringBuilder instance operates directly on that StringBuilder instance's value, so you do not need to assign the result of the Replace function back to your StringBuilder to pick up the changes:

    StringBuilder myBuilder = new StringBuilder("Hello, my name is [$NAME$].");
    myBuilder.Replace("[$NAME$]", "Matt");

Tokens like [$NAME$] are fairly intuitive and can be placed in ASP.NET controls, HTML controls, static HTML markup, and code in your code-behind. So it's the option we're going to run with for building the global string replacement mechanism for our ASP.NET application.

Acquiring All ASP.NET Page Content

Our biggest objective in building a global string replacement mechanism is that it needs to work everywhere. If you put a token directly in your HTML, it needs to be replaced. If you assign a token to an ASP.NET control from a code-behind, then it needs to be replaced. If you put a token in a database and a control pulls that value from the database and displays it, then it needs to be replaced. So… how do you go about doing that?

When it comes right down to it, all of the architecture and code for ASP.NET revolves around building a giant string to send to the browser. ASP.NET controls, HTML, code, values from a database – they all end up as part of a string containing the source for a page. All we have to do is intercept that string before ASP.NET sends it to the browser and run our replacements. And it really doesn't take all that much code.

Since we want this mechanism to be available to all of the pages in an application, we'll create a new class named TokenReplacementPage that derives from System.Web.UI.Page. Any pages requiring token replacement functionality just need to derive from TokenReplacementPage instead of System.Web.UI.Page.
Following is the code for the TokenReplacementPage class:

Code Listing 1 – TokenReplacementPage class

    using System;
    using System.IO;
    using System.Text;
    using System.Web.UI;

    public abstract class TokenReplacementPage : Page
    {
        protected override void Render(HtmlTextWriter writer)
        {
            //Create our own mechanism to store the page output
            StringBuilder pageSource = new StringBuilder();
            StringWriter sw = new StringWriter(pageSource);
            HtmlTextWriter htmlWriter = new HtmlTextWriter(sw);
            base.Render(htmlWriter);

            //Run replacements
            RunPageReplacements(pageSource);
            RunGlobalReplacements(pageSource);

            //Output replacements
            writer.Write(pageSource.ToString());
        }

        protected void RunGlobalReplacements(StringBuilder pageSource)
        {
            pageSource.Replace("[$SITECONTACT$]", "John Smith");
            pageSource.Replace("[$SITEEMAIL$]", "john.smith@somecompany.com");
            pageSource.Replace("[$CURRENTDATE$]", DateTime.Now.ToString("MM/dd/yyyy"));
        }

        protected virtual void RunPageReplacements(StringBuilder pageSource)
        {
        }
    }

First, note that the TokenReplacementPage class is an abstract class that derives from System.Web.UI.Page. This means that it has all the standard Page functionality as well as token replacement features. All you have to do to confer token replacement functionality to a page is inherit from the TokenReplacementPage class instead of the Page class.

There are three methods inside the TokenReplacementPage class:

- The Render method – writes the HTML page source to a StringBuilder
- The RunGlobalReplacements method – performs global token replacements on the source
- The RunPageReplacements method – allows for page-specific token replacements

Writing the Page Source to a StringBuilder

The overridden Render method has a single HtmlTextWriter parameter named writer. An HtmlTextWriter allows you to write HTML to a stream. In this case, the underlying stream is sent to the browser, so the writer parameter is your conduit for outputting HTML to the people viewing your page.
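The writer-chaining trick at the heart of Render can be seen in isolation. The sketch below is plain console code rather than an actual page, and the markup it writes is made up for illustration, but the plumbing is the same: anything written to the HtmlTextWriter lands in the StringBuilder, where it can be edited before anyone sees it.

```csharp
using System;
using System.IO;
using System.Text;
using System.Web.UI;  // HtmlTextWriter

class WriterChainDemo
{
    static void Main()
    {
        StringBuilder buffer = new StringBuilder();
        // StringWriter writes into the StringBuilder...
        StringWriter sw = new StringWriter(buffer);
        // ...and HtmlTextWriter writes into the StringWriter
        HtmlTextWriter html = new HtmlTextWriter(sw);

        html.Write("<p>Hello, [$NAME$]!</p>");
        html.Flush();

        // The markup never went to a browser; it is sitting in the
        // buffer, ready for token replacement before output
        buffer.Replace("[$NAME$]", "Matt");
        Console.WriteLine(buffer.ToString());
        //OUTPUT--><p>Hello, Matt!</p>
    }
}
```

Because base.Render is handed the chained writer instead of the real one, the page cannot tell the difference; it renders normally while we quietly capture the result.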
Normally, the Render method iterates through all of the controls on the page and passes the writer parameter to the Render method on each individual control. As each control executes its Render method, it writes the HTML for that section of the page to the browser. Since we want to capture the entire page source before it gets to the browser, we need to do a little work to re-route the page source into a stream that we can access.

To start, we create a "stream" that we can work with: a StringBuilder instance named pageSource. Next, we create a new StringWriter named sw and pass pageSource into its constructor. This initializes sw with pageSource as its underlying store, so anything written to sw is output to pageSource. Next, we create a new HtmlTextWriter named htmlWriter and pass sw into its constructor. This initializes htmlWriter with sw as its underlying TextWriter. Thus, anything written to htmlWriter is written to sw, which is then written to pageSource. Finally, we pass htmlWriter to the base.Render method and allow the page to render as it normally would. After base.Render finishes, the entire page source is available for modification in pageSource.

After acquiring the page source, the overridden Render method passes pageSource to two methods that are responsible for actually making the substitutions: RunPageReplacements and RunGlobalReplacements. We will discuss these in more detail shortly. Once the replacements have been made, the only thing left to do is send the updated content to the browser. We do that in the last line of the overridden Render method by writing pageSource out through the writer parameter. Next, let's take a look at making the actual replacements.

Global Replacements

You make global replacements, that is, the replacements you want to make on every page that inherits from the TokenReplacementPage class, in the RunGlobalReplacements method.
All you have to do to make a replacement is call the Replace method on the pageSource parameter and pass in the token and the replacement value:

    pageSource.Replace("TOKEN", "REPLACEMENT VALUE");

The Replace method then searches through the string in pageSource and replaces any instances of the token with the replacement value. Remember to make your token something that is unlikely to normally appear in the page. For example, [$NAME$] is a much better choice than just NAME because it's very unlikely a normal sentence would contain a word with brackets and dollar signs around it. You don't want to accidentally mistake a normal word in a sentence for a token.

Page Specific Replacements

There may be times when you want to run page-specific replacements. Notice that the RunPageReplacements method in the TokenReplacementPage class is marked as virtual and contains no code. This allows you to override the RunPageReplacements method on the page in which you want to make page-specific token replacements. The replacements are made in the exact same fashion as described in the Global Replacements section, but they are only applied to that specific page:

Code Listing 2 – Overridden RunPageReplacements Example

    public partial class PageSpecificReplacementsPage : TokenReplacementPage
    {
        protected override void RunPageReplacements(
            System.Text.StringBuilder pageSource)
        {
            pageSource.Replace("[$PAGESPECIFICTOKEN$]",
                "This replacement only runs on this page!");
        }
    }

Checking out the demo application

Download the demo application (from the Code Download link in the box to the right of the article title) and extract it to a location of your choosing on your hard drive. Start Visual Studio, and open the web site from wherever it is that you chose to save it. There are only four files (not including code-behinds) in the entire demo.

Take a look at the markup in the two ASP.NET pages and notice the various tokens that appear throughout.
Also take a look at each page's code-behind file, because you will see a token set in code to demonstrate that you can put a token anywhere and, as long as it is output to the page source, it is replaced by the token replacement mechanism. The code-behind for PageSpecificReplacementsPage.aspx also contains the RunPageReplacements override from Code Listing 2. When you run the demo application, you will see that the tokens are replaced with their respective values when the page appears in your browser. Make sure to observe the difference in the [$PAGESPECIFICTOKEN$] behavior between Default.aspx and PageSpecificReplacementsPage.aspx. Feel free to add new tokens, replacement values, and pages to the demo application to get a feel for how it all works.

Conclusion

When you need a token replacement mechanism, you should be well equipped with this solution. It gives you the ability to replace tokens regardless of whether they appear in ASP.NET controls, HTML controls, static HTML markup, code, or even a content management database. As long as you can get the token to render on the page, it can be replaced.
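The core idea also travels well beyond WebForms. As a closing recap, here is a minimal, framework-free sketch of the same replacement loop; the token names, values, and ReplaceTokens helper are made up for illustration rather than taken from the demo application:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class TokenDemo
{
    // Replace every occurrence of each [$TOKEN$] key with its mapped value
    static string ReplaceTokens(string source, IDictionary<string, string> tokens)
    {
        StringBuilder sb = new StringBuilder(source);
        foreach (KeyValuePair<string, string> pair in tokens)
            sb.Replace(pair.Key, pair.Value);
        return sb.ToString();
    }

    static void Main()
    {
        var tokens = new Dictionary<string, string>
        {
            { "[$NAME$]", "Matt" },
            { "[$PASSWORD$]", "5ZQS76Bv" }
        };

        string email = "Dear [$NAME$], your new password is [$PASSWORD$].";
        Console.WriteLine(ReplaceTokens(email, tokens));
        //OUTPUT-->Dear Matt, your new password is 5ZQS76Bv.
    }
}
```

Driving the replacements from a dictionary keeps the token list in one place, which makes it easy to load token/value pairs from a config file or database instead of hard-coding them.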
https://www.simple-talk.com/dotnet/asp.net/token-replacement-in-asp.net/