Upgraded the main computer to Fedora 34. The upgrade was uneventful, but left me with a few little annoyances. It's running Wayland, so there's no longer a primary monitor. Also, something is slamming my processors and some programs are taking a long time to load. I was hoping things would be in better shape by this point in the life-cycle, but every once in a while there's a buggier Fedora.

Programming Update for June 2021

June was mostly Python, although I did do chapter 1 of Scratch 3 Games for Kids with Sam. He really, really enjoyed it and I anticipate doing the challenge problems and maybe chapter 2 in July or August.

Books

I read the intro and first couple chapters of both Flask Web Development, 2nd Edition and Data Visualization with Python and Javascript, both from a recent Humble Bundle. The Flask book may be useful for learning more about creating a non-Django site and, even if I mostly stick with FastAPI, it should provide some concepts that are applicable across both frameworks. As for the data visualization book, I would love to use it to better visualize my annual Last.fm stats.

Advent of Code

While at my in-laws' house, I completed days 13 and 14 of 2015's Advent of Code. Back when I first started working on Advent of Code 2015, I went through all the problems and posted my first stab at a solution into a Google document. For day 13 I'd predicted that a modified version of the Traveling Salesman problem I'd done for 2015 Day 9 might work, using an asymmetrical matrix. That turned out to be exactly the right solution. In Ruby I learned how to write a ternary expression; see the "number =" assignment in the create_guest_hash function.
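Since Python is my main language, it's worth noting the equivalent construct: Ruby's "condition ? a : b" maps onto Python's conditional expression, "a if condition else b". A minimal sketch, with made-up stand-ins for the parsed values:

```python
# Ruby: number = action == "lose" ? "-#{amount}" : "#{amount}"
# The same logic as a Python conditional expression.
action = "lose"  # hypothetical stand-in for people_and_values[0][1]
amount = "54"    # hypothetical stand-in for people_and_values[0][2]
number = f"-{amount}" if action == "lose" else amount
print(number)  # prints -54
```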
require "../../input_parsing/parse_input"

def create_guest_hash(lines)
  guest_hash = Hash.new
  lines.each do |line|
    people_and_values = line.scan(/(\w+) would (gain|lose) (\d+) happiness units by sitting next to (\w+)\./)
    if !guest_hash.has_key?(people_and_values[0][0])
      guest_hash[people_and_values[0][0]] = {}
    end
    number = people_and_values[0][1] == "lose" ? "-#{people_and_values[0][2]}" : "#{people_and_values[0][2]}"
    guest_hash[people_and_values[0][0]][people_and_values[0][3]] = number
  end
  guest_hash
end

def create_matrix(guest_hash)
  index_hash = Hash.new
  guest_hash.keys.each_with_index do |key, index|
    index_hash[index] = key
  end
  matrix = []
  index_hash.keys.each_with_index do |index|
    temp_internal_list = []
    current_person = index_hash[index]
    index_hash.keys.each_with_index do |internal_index|
      if internal_index == index
        temp_internal_list.append(0)
      else
        temp_internal_list.append(guest_hash[current_person][index_hash[internal_index]].to_i)
      end
    end
    matrix.append(temp_internal_list)
  end
  matrix
end

def perfect_seating(happiness_graph, starting_person, number_of_people)
  vertex = []
  (0...number_of_people).each do |number|
    if number != starting_person
      vertex.append(number)
    end
  end
  max_happiness = 0
  permutation_list = vertex.permutation.to_a
  permutation_list.each do |permutation|
    current_happiness_weight = 0
    outer_array_index = starting_person
    permutation.each do |inner_array_index|
      current_happiness_weight += happiness_graph[outer_array_index][inner_array_index]
      current_happiness_weight += happiness_graph[inner_array_index][outer_array_index]
      outer_array_index = inner_array_index
    end
    current_happiness_weight += happiness_graph[outer_array_index][starting_person]
    current_happiness_weight += happiness_graph[starting_person][outer_array_index]
    max_happiness = [max_happiness, current_happiness_weight].max
  end
  max_happiness
end

if $PROGRAM_NAME == __FILE__
  guest_preferences = input_per_line('../input.txt')
  guest_preference_hash = create_guest_hash(guest_preferences)
  guest_preference_matrix = create_matrix(guest_preference_hash)
  starting_person = 0
  total_happiness = perfect_seating(guest_preference_matrix, starting_person, guest_preference_matrix.length)
  puts "With the perfect seating arrangement total happiness is #{total_happiness}."
end

Day 14 turned out to be a very easy solution that was able to use what I'd come up with ahead of time without modification; for part 1, anyway. Part 2 became a bit of a mess that seemed to require creating a Reindeer class in order to keep track of which Reindeer to award points to. Since I hadn't done anything with Ruby classes yet (they weren't really covered in the Ruby book I read with the kids), I haven't yet done the Ruby or Perl solutions. You can see my Python solution here.

Prophecy Practicum

I made a bunch of quality-of-life improvements for my friend, aka The Client. I was pretty happy at my ability to continue to figure out how to make use of Django's many built-in features.

Extra Life Donation Tracker (eldonationtracker)

I had to do a bug fix for a change in the way the API handled anonymous donations. Overall it wasn't too hard. Sometime soon I intend to start separating out the Donor Drive API into its own project apart from the Extra Life code.

Impractical Python

After a long hiatus, I returned to Impractical Python and finished the Haiku chapter. It used Markov Chains and the CMU language module to help teach the computer what a valid Haiku was. Here is one of the Haikus it generated:

listen: new year's bell!
bell! standing still at dusk house
listen in far in

This next one is kind of nonsense, but I like it. Seems faux-zen:

oh no! a white swan
butterfly butterflies the
only fluttering

This one is closer to a real Haiku:

let myself pretend
my old sadness winter rain
rain deepens lichened

I'll leave you with this last one which actually sounds like a human-written Haiku:

closer, quilt and leaves,
enfold my passionate cold!
cold!
dry cheerful bright

MxPx – MxPx: You're Never Too Old to Rock

Last October I came back to MxPx, as I detailed in this post about how the lyrics for Friday Tonight led me to finally watch Friday. Over the past half year I kept returning to their self-titled album on Spotify. (I don't subscribe to Spotify, but I do use the free tier to discover new artists.) Eventually I decided to go ahead and buy the Deluxe version of the album directly from MxPx. It doesn't appear they're selling the CD anywhere else (at least it's not on Amazon). For the most part, I found the music on the album to sound pretty similar to Teenage Politics, Life in General, and the harder punk songs on Slowly Going the Way of the Buffalo, The Ever Passing Moment, and Before Everything & After (although I hadn't yet heard that last album at the time). Interestingly, back in 2013, I wrote about relistening to my older MxPx albums. While I did focus on age allowing for appreciation of a slower, less aggressive sound, I think the genesis of that blog post was the (potential) silliness of going back to kiddie lyrics as an adult. Now, much closer to 40 than I was then, I think that's probably even more true. But I think I was perhaps too harsh on the idea of hard rock itself. It's just that, up to that moment, a lot of that style of rock hadn't grown up yet. Most of the bands were hardly older than me and they were also writing towards what they hoped was a more lucrative audience. But now we're all older, and I believe the members of MxPx are about 5-7 years older than I am. So this album contains lyrics tackling more mature topics. Oh sure, there's still Rolling Strong, which can truly be best appreciated by other musicians. But songs like Let's Ride, Life Goals, Pipe Dreams, and Moments Like This (especially Moments Like This) definitely come from an older songwriter and resonate a lot with me as an older guy and a parent. And YET this is a hard rocking album.
This is an album I love to put on and just jam to while driving on the highway. If the wife and I were driving around as much as we used to before the pandemic, I'd love to get her addicted to some of these songs so we could jam out together like we used to with Anberlin. Let's look at each track individually:

- Rolling Strong – Just like Play It Loud on Before Everything & After, a song about being a band; this time about touring, with hints of "don't discount us because we're getting old."
- All of It – Basically, a punk rock love song. The title comes from the chorus, "how much of your love do you think I need? All of it". Most of the lyrics involve the singer talking about how he's going to make the relationship a great one.
- Friday Tonight – As I mentioned above, the song that got me interested in MxPx again. Definitely a rousing song to play at concerts, especially if they happen to be taking place on a Friday. The opening music is way calmer than the rest, which causes a somewhat jarring move into the song proper. The lyrics are kind of nonsense, especially compared to the rest of the album. I mean, each verse has a coherent thought, but the song as a whole doesn't really come together in any real way. Or rather, it's basically poetic in the same way that some of the earlier Fall Out Boy songs are sometimes poetic; again, each verse strings together logically even if the whole doesn't cohere as easily as a typical song.
- Let's Ride – a biographical song (or at least that's the conceit) in which the singer tells his friend and/or partner that they should just experience the joy of driving and leaving things behind. It doesn't seem to be a "screw the world, we're leaving it all behind" vibe. More like: let's escape for a while and recharge our batteries.
- Uptown Streets – Hard to properly quantify, but I guess it's a "don't forget your city areas" type of song.
But I do love the cheekiness of this early verse: "I was truly scared, for the first nine years I ran/ It was a mighty fine day in the USA I bet/ Wrote it on the back of my hand so I wouldn't forget"
- 20/20 Hindsight – The lyrics are a bit vague, but it seems to be a singer lamenting what they will do once they can no longer be in a band. It can be generalized to anyone who's losing the ability to do what they love, though.
- The Way We Do – Essentially a song about MxPx being happy they have the life they have. They reminisce about some past tour pranks and express an overall gratitude that they get to do what they love for a living.
- Life Goals – A somewhat sarcastic song about how life doesn't always care about the goals you have. Pretty catchy chorus.
- Pipe Dreams – In great symmetry with Life Goals, it tells you to "hold onto those pipe dreams". My favorite line in the song: "hold onto those pipe dreams/If I'm wrong/the worst is already happening". The music has a happy tone and the lyrics balance so well against the previous song.
- Disaster – It sounds like it's going to be a negative song or a sad song of unrequited love, but as best as I can tell, it's essentially the song's protagonist saying that the effect the object of his love/desire has on him is to leave him stunned. "Disaster" seems like a hyperbolic word for it, since the rest of the lyrics suggest things are fine; the protagonist is just SO in love. Also, I love what Herrera does with the vocals on the lyrics: "I have the best time with you…everlasting". I know it's a technique I've heard on other punk songs (maybe other MxPx songs? I can't remember) but I'd love the podcast "Why Do I Love This?" to analyze what it's doing and why it tickles me so much.
- Moments Like This – This song is the PERFECT ending to this album. It works so well that I'm *almost* disappointed that the Deluxe version I bought goes on for four more songs.
I like ¾ of the added songs, but I just think this is the perfect closer for an album that is about punk rockers all grown up. They've been a band since high school and are now in their 40s, and this song works so well as the end of this album. It's about savoring life as it happens. Every time I hear the song I want to make sure I spend more time with the family.
- Best Life (starting from here, these are tracks exclusive to the Deluxe edition) – This one almost sounds more like an Aerosmith song or some other non-MxPx band. Also sounds like it could be in a commercial. It's OK as a drinking song, but it's my least favorite on this entire album.
- The Band Plays As We All Go Down – Referencing a (legend? truth?) about the Titanic, the song seems to oscillate between hope and despair. But a good return to an MxPx sound after the previous track.
- Forget it All – A song about being frustrated and wanting to get it out of your system. A nice, fast beat – could be great for a concert.
- That's Life – Another track about getting older. Partially about the world changing around you and partially about dreams unachieved. It's actually not a bad album-ending song, but it's slightly sadder than Moments Like This. I like Moments Like This a bit better for the last track because it's slightly more positive.
(The next five songs are actually on a second CD if you buy the CD from their merch shop, so I consider them to be a separate acoustic EP)
- Let's Ride (Acoustic) – this one works well as an acoustic song
- All of It (Acoustic) – this one's OK acoustically, but I like the original better
- Uptown Streets (Acoustic) – this one ALMOST works better in the acoustic version than in the original
- Rolling Strong (Acoustic) – I really don't like this one as an acoustic song at all
- Moments Like This (Acoustic) – this one works equally well in each version

Django vs Flask vs FastAPI

Over the 16 years that this blog spans I must have, at some point, mentioned that I believe sometimes we get knowledge when we're not ready to receive it. This isn't some spiritual or new age thing (although you'll hear Chopra and/or Tony Robbins talk about the phenomenon). It's simply my lived experience. Sometimes you come across some knowledge, but there's some underpinning knowledge missing or maybe some life experience you don't yet have to put your new knowledge into context. Sometimes this leads to difficulty learning the concept, and other times you just don't get the point of it and file it away or throw it away – no need to waste neurons on this! The same thing happened to me with Python and web frameworks. I already mentioned this a bit in my March 2021 Programming update, but I wanted to elaborate on it in its own blog post. A bit of background: back in the late 90s and early 2000s I created lots of web pages using HTML and, sometimes, a bit of Javascript. Right around the time of HTMLX and other initiatives to get the web to be more interactive (what would eventually lead to Web 2.0 within a half decade), I moved on to using blogging platforms and mostly left web programming behind. Sure, I did a couple things here and there, like my Vietnamese Zodiac page (which, at one point, was getting tons of hits on Google), but mostly I left the web behind.
Some time in the last 5 years (or maybe more?) I looked at both Django and Flask, but found them inscrutable. Django, especially, just seemed like overkill, with a huge learning curve just to get started. I left them behind and continued to create command-line and GUI programs (mostly with PyQt bindings). About 3 or so years ago someone from work approached me asking about a way to automate a part of his spiritual practice. It would clearly benefit from a web interface, but I remembered things being too hard with Flask and Django. Then COVID happened and I had lots of free time to learn things. I took a look at the books and videos I'd bought from various Humble Bundles. I tend to learn best from curated experiences like books and classes as opposed to just reading framework tutorials. I had a video class from Packt focused on using Flask to create an API-based site. This is one of the most common workflows in modern web design: an API backend written in Python, Ruby, Go, or Java interacting with a front-end written in Javascript. With this class as my guide, Flask finally made sense. I understood what routes were and how they interacted with the user. I gained a better understanding of how to write a site that would interact with a database. This was great! After taking the class I started working on my colleague's site. After I had the database schemas set up, we started discussing the user workflow he anticipated. There was a lot of admin backend to construct and, once again, I stopped writing websites back before CSS and all that. Also, unfortunately, the class was focused on an exceedingly simple website, and when I used the Flask tutorial to try and shore up how to handle user logins and so forth, it all started getting way too complicated. I lost interest and the project floundered. Then, for some reason – I think I was flipping through my Python books – I decided to look at one of the Django books and found that Django comes with an admin interface built-in.
And making changes to the admin pages is somewhat trivial. I was rejuvenated in my desire to help my colleague, so I started recreating the site with Django. There is indeed still a higher cognitive load to Django. The simplest Flask site with its app.route is still infinitely simpler than Django with its views, models, forms, settings, etc. And yet, if you're already going to go for a more complex site and do your user interface in Python, suddenly all that complexity pays off. The ability to set up your models and forms with ease makes a model-heavy workload so much easier. And I had a better understanding of what all this meant now that I'd worked with Flask. What I find most interesting, and what led me to write this post, is that I recently got O'Reilly's Flask Web Development from another Humble Bundle. Just as having worked with Flask helped me better understand what Django is trying to do, having worked with Django is feeding back into helping me understand Flask. Having worked with models and forms in Django has cemented in my head how web backends (particularly those written in Python) think about the web and separation of concerns; it's all forming a virtuous cycle. And so while I anticipate perhaps having some momentary screwups where I try to think Flask-y while working on Django or Django-y while working on a Flask site, each is helping me become a better Python web developer no matter the framework being used. So, to the implicit question in my blog post (assuming you're still reading after that preamble of sorts): you've probably guessed by now that I'm not an absolutist when it comes to these major Python web frameworks. Django, as I recently learned, was born to power a news website when the idea of a CMS was in its infancy. If you're looking to create a website with a Python backend that needs a strong admin interface, strong database dependencies, and a more batteries-included workflow, you're going to want to use Django.
If you just need an API that's going to interact with mobile apps or Javascript, then (at my current level of understanding) it's a toss-up between Flask and FastAPI. I think MOST apps nowadays use a database of some sort, but if you're not using one, then you're definitely better off using Flask or FastAPI instead of Django. They're both barebones, with a lower cognitive load to get started. Flask is older and may have more plugins and add-on packages to help with features you might want. FastAPI is newer and async-native, and it gets some input validation "for free" thanks to Pydantic. It also has a really easy way to write pytest unit tests. It's almost too easy! FastAPI's website maintains a page comparing it to alternatives as an explanation of why the original dev put it together. It seems written in a fair manner, although it's obviously going to be at least slightly biased in favor of FastAPI. As I said, I am just a baby dev when it comes to Python web frameworks, so I'm not the best person to comment on when to choose Flask over FastAPI, but I'd probably lean towards the latter if I were starting a site fresh. It really does a lot of great things with JSON automatically and is self-documenting. The important thing is not to be dogmatic about things. There are occasions where each is best. I LOVE FastAPI for my Civilization VI Play By "Email" webhook server. As I spent lots of digital "ink" above describing, Django works perfectly for my colleague's web app. Using Django freed me from coding and maintaining all the admin code and database handling. And on the other side, my webhook server just needed to have some REST API endpoints. Even if I eventually add database support, I still don't need any kind of admin or real user account service with that app. So when you're ready to write your next Python web app, think about your needs and think about what the strengths and weaknesses of all the frameworks are.
MxPx – Before Everything & After: What if MxPx made a Good Charlotte album?

Programming Update for May 2021

Advent of Code 2015 Problem Set

Day 10

There's a lot to be said for doing the Advent of Code in December along with a few thousand other programmers. It's fun to be part of something with others and you get to see all the neat and wacky solutions that others come up with. On the other hand, going at my own pace with the 2015 problem set allows for interesting little coincidences to form. What I did one day (when I was at about Day 7) was to go through all the remaining days and write some first-impression ideas for solutions. This got my brain thinking about what I needed for each day. One day, before getting to the Day 10 problem, I was idly flipping through the book Learn You a Haskell for Great Good! as I was trying to decide if it would be one of the languages I'd add for 2016. I ended up coming across a built-in library that would have made solving Day 10 a real breeze. Day 10 is the Look and Say sequence. I'm sure by paying close attention to that wiki entry I could have figured out an algorithm. But basically I just needed to group together each run of repeated digits and then take the length of that list/array to find out how many numbers there were. That becomes part of the new number. Unfortunately, as far as I could see, that functionality was not built into Python.
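(As it turns out, Python's standard library can do this grouping after all: itertools.groupby groups consecutive equal elements. For comparison, a sketch of a look-and-say round built on it; this is an illustration, not the code I actually wrote at the time:)

```python
# itertools.groupby groups consecutive equal characters, so one
# look-and-say round becomes nearly a one-liner.
from itertools import groupby


def look_and_say_round(number: str) -> str:
    return "".join(f"{len(list(group))}{digit}" for digit, group in groupby(number))


print(look_and_say_round("111221"))  # prints 312211
```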
So my Python code looked like this:

"""Find the length of a number after a look-and-say game."""


def create_number_lists(game_input):
    current_number = 0
    number_count = 0
    return_list = []
    for number in game_input:
        if len(game_input) == 1:
            return [(1, int(number))]
        if int(number) != current_number:
            if number_count != 0:
                return_list.append((number_count, int(current_number)))
            current_number = int(number)
            number_count = 1
        else:
            number_count += 1
    return_list.append((number_count, int(current_number)))
    return return_list


def recombine(number_list):
    return "".join(
        str(number) for num_tuple in number_list for number in num_tuple
    )


if __name__ == "__main__":
    puzzle_input = '1321131112'
    loop_count = 0
    puzzle_output = ""
    while loop_count < 40:
        puzzle_output = recombine(create_number_lists(puzzle_input))
        puzzle_input = puzzle_output
        loop_count += 1
    print(f"The length of puzzle_output ({puzzle_output}) is {len(puzzle_output)}")

Meanwhile, look at how much cleaner this is in Ruby using chunk_while:

def look_and_say_round(game_input)
  game_input_array = game_input.split(//)
  game_input_array.chunk_while { |a, b| a == b }
                  .flat_map { |chunk| "#{chunk.size}#{chunk.first}" }
                  .join('')
end

if $PROGRAM_NAME == __FILE__
  input = "1321131112"
  loop_count = 0
  until loop_count == 40
    puzzle_output = look_and_say_round(input)
    input = puzzle_output
    loop_count += 1
  end
  puts "Length of output after 40 rounds of look and say is #{puzzle_output.length}"
end

Revisiting the code to write this blog post, I think I could probably reduce it down to one line (in the look_and_say_round function) if I wanted to sacrifice a bit of readability. The Perl code falls somewhere in the middle. I easily found a CPAN module to do the equivalent of chunk_while.
However, I'm not as conversant in Perl, so I don't know if I'm being a little extra verbose here:

#!/usr/bin/perl
use v5.20;
use warnings;
use Array::GroupBy qw(igroup_by);

sub look_and_say_round {
    my $game_input = $_[0];
    my @numbers = split("", $game_input);
    my $number_iterator = igroup_by(
        data    => \@numbers,
        compare => sub { $_[0] eq $_[1] },
    );
    my $look_and_say_text = '';
    while (my $grouped_array = $number_iterator->()) {
        my $length = @{$grouped_array};
        my $number = @{$grouped_array}[0];
        $look_and_say_text = $look_and_say_text . "$length$number";
    }
    return $look_and_say_text;
}

my $puzzle_input = "1321131112";
my $loop_count = 0;
my $puzzle_output;
until ($loop_count == 40) {
    $puzzle_output = look_and_say_round($puzzle_input);
    $puzzle_input = $puzzle_output;
    $loop_count++;
}
my $output_length = length($puzzle_output);
say "The final number is $puzzle_output with a length of $output_length";

Day 11

Day 11 wasn't too hard in any language. It basically gave some password generation rules and required you to increment passwords in a certain way. However, there was a bit of elegance in the Ruby solution that I guess points towards part of the reason why folks tend to REALLY love this language.
First, let me show you my Python code:

import re


def rule_one(password: str) -> bool:
    """Check if a password includes a straight of at least 3 letters."""
    for index, letter in enumerate(password):
        if index > (len(password) - 3):
            # no room left for a three-letter straight
            return False
        current_letter_ascii_value = ord(letter)
        if ord(password[index + 1]) == current_letter_ascii_value + 1 and \
                ord(password[index + 2]) == current_letter_ascii_value + 2:
            return True
    return False


def rule_two(password: str) -> bool:
    """Check that a password does not contain i, o, or l."""
    return 'i' not in password and 'o' not in password and 'l' not in password


def rule_three(password: str) -> bool:
    """Check for at least two different, non-overlapping pairs of letters."""
    rule_three_regex = re.compile(r'(\w)\1')
    all_pairs = re.findall(rule_three_regex, password)
    if len(all_pairs) < 2:
        return False
    return len(set(all_pairs)) > 1


def increment_password(password: str) -> str:
    """Given a password, increment the last letter by 1."""
    if password[-1] == 'z':
        return increment_password(password[:-1]) + 'a'
    return password[:-1] + chr(ord(password[-1]) + 1)


def find_next_valid_password(password: str) -> str:
    """Find the next password that meets all the criteria."""
    while True:
        if rule_one(password) and rule_two(password) and rule_three(password):
            return password
        password = increment_password(password)


puzzle_input = "hxbxwxba"
first_new_password = find_next_valid_password(puzzle_input)
second_new_password = find_next_valid_password(increment_password(first_new_password))
print(f"Santa's first new password is {first_new_password}")
print(f"Santa's second new password is {second_new_password}")

Both rule_one and increment_password are a little inelegant. I spent a bunch of time trying to figure out a better rule_one with regex and I couldn't. And for the password incrementing, I had to convert each letter to its ASCII value to get the next letter and then convert back to a string. Ruby, however, was so nice!
def rule_one(password)
  characters = password.split(//)
  triple_straight = characters.chunk_while { |i, j| i.next == j }
                              .filter { |chunk| chunk.length >= 3 }
  triple_straight.length >= 1
end

def rule_two(password)
  !password.include? "i" and !password.include? "o" and !password.include? "l"
end

def rule_three(password)
  pairs = password.scan(/(\w)\1/)
  pairs.length >= 2
end

if $PROGRAM_NAME == __FILE__
  current_password = "hxbxwxba"
  puts "Santa's starting password is: #{current_password}"
  until rule_one(current_password) and rule_two(current_password) and rule_three(current_password)
    current_password = current_password.next
  end
  puts "Santa's next password should be: #{current_password}"
  puts "Then Santa's password expired again!"
  current_password = current_password.next
  until rule_one(current_password) and rule_two(current_password) and rule_three(current_password)
    current_password = current_password.next
  end
  puts "Santa's next password should be: #{current_password}"
end

First of all, the return value of chunk_while allows rule_one to work SO, SO well. But the best part is that Ruby's "something.next" method just works so perfectly, both here and in the password-incrementing code. It works especially well because it does the right thing where "az" increments to "ba". It is just so beautiful for doing this and I love the resulting code! Looking back at this now, I see that there is some code in the "main" block which I could have refactored out into a function.

Day 12

Usually part 2 of an Advent of Code problem is just a bit more complex, introducing complications that may mess with shortcuts or assumptions the programmer made in part 1. But every once in a while, the code needed for part 2 is radically different from the code for part 1. For part 1 I was able to use a simple regular expression to get the answer.
The code is simple (as usual, I've actually over-complicated things a bit so that I can use unit tests to test against the examples given):

import re
from sys import path

path.insert(0, '../../input_parsing')
import parse_input


def find_numbers(line: str):
    regex = re.compile(r'(-*\d+)')
    numbers = re.findall(regex, line)
    return [int(number) for number in numbers]


def sum_number_list(number_list: list[int]) -> int:
    return sum(number_list)


if __name__ == "__main__":
    lines = parse_input.input_per_line('../input.txt')
    total = sum([sum_number_list(find_numbers(line)) for line in lines])
    print(f"The sum of all numbers in the document is {total}")

But for part 2, I actually had to create a JSON parser of sorts.

import json
from sys import path

path.insert(0, '../../input_parsing')
import parse_input


def find_numbers(elf_json):
    summation = 0
    if isinstance(elf_json, list):
        for item in elf_json:
            summation += find_numbers(item)
    elif isinstance(elf_json, int):
        return elf_json
    elif isinstance(elf_json, str):
        return 0
    else:
        for key, value in elf_json.items():
            if "red" in elf_json.values():
                return 0
            elif isinstance(value, int):
                summation += value
            elif isinstance(value, list):
                summation += find_numbers(value)
            elif isinstance(value, dict):
                summation += find_numbers(value)
            else:
                # this is a color string
                summation += 0  # this used to be a print statement
    return summation


if __name__ == "__main__":
    json_string = parse_input.input_per_line('../input.txt')
    total = json.loads(json_string[0])
    print(f"The sum of all numbers in the document (unless it's got a red property) is {find_numbers(total)}")

Prophecy Practicum

This is a project I'm working on for a buddy in order to fulfill a need he has with a spiritual practice he's involved in. As I mentioned before, moving from Flask to Django allowed me to make lots of progress on the project. In May I finally reached the v1.0 milestone and it was ready for users to try out.
As I write this in June, the first cohort has gone through it and I'm waiting to see what modifications we need to make. Additionally, I have a couple features I'm trying to figure out.

ELDonation Tracker for Extra Life

I started off May fixing a bug where DonorDrive had changed their API output and it messed up my Avatar image output. While investigating the API changes, I found out they had added in APIs for milestones, badges, and incentives. This led to the release of v6.1. Next up for this project will be v7.0, where I move the API code out to its own project so that it can be useful as a Python reference API for DonorDrive, independent of Extra Life.

Harry Potter Word Frequency Calculator

I had a conversation with Scarlett about how puzzle solvers (cryptographers) use word frequency to help solve their puzzles. And I told her that the most often used word in English is "the". She was a little skeptical, so I wrote a program to do word frequency analysis on Harry Potter and the Philosopher's Stone. Since it was my first time using my chosen epub library, I just took it one chapter at a time to get a better understanding of how it works.

from collections import Counter
import ebooklib
from ebooklib import epub
import re

philosopher_stone = epub.read_epub("/home/ermesa/eBooks/J.K. Rowling/Harry Potter and the Sorcerer's Stone (441)/Harry Potter and the Sorcerer's Stone - J.K. Rowling.epub")
items = philosopher_stone.get_items_of_type(ebooklib.ITEM_DOCUMENT)
items_list = list(items)
remove_html_tags_regex = re.compile(r'<[^>]+>')
chapter_one = remove_html_tags_regex.sub('', items_list[6].get_body_content().decode())
chapter_one_words = chapter_one.split()
print(Counter(chapter_one_words))

And here's the output:

Counter({'the': 187, 'a': 105, 'and': 95, 'to': 93, 'he': 87, 'was': 84, '.': 81, 'of': 72, 'his': 66, 'in': 55, 'He': 49, '—': 49, 'that': 48, 'on': 44, 'as': 41, 'had': 39, 'Dursley': 38, 'it': 36, 'have': 32, 'at': 32, 'Mr.': 30, 'said': 28, 'Professor': 28, 'be': 27, 'I': 27, 'all': 25, 'were': 23, 'Mrs.': 21, 'you': 21, 'didn’t': 21, 'but': 21, 'out': 21, 'It': 21, 'been': 21, 'she': 20, 'for': 20, 'her': 19, 'they': 18, 'Dumbledore': 18, 'very': 17, 'people': 17, 'over': 17, 'into': 17, 'cat': 17, 'McGonagall': 16, 'with': 15, 'not': 15, 'The': 14, 'him': 14, 'Harry': 14, 'up': 13, 'this': 13, 'back': 13, 'if': 12, 'so': 11, 'it.': 11, 'about': 11, 'couldn’t': 11, 'down': 11, 'know': 11, 'their': 10, 'would': 10, 'could': 10, 'what': 10, 'never': 10, 'even': 10, 'them': 10, 'just': 9, 'man': 9, 'there': 9, 'no': 9, 'like': 9, 'something': 9, 'thought': 9, 'by': 9, 'see': 9, 'owls': 9, 'looked': 9, '“I': 9, 'Dumbledore,': 9, 'Privet': 8, 'did': 8, 'think': 8, 'Potter': 8, 'seen': 8, 'street': 8, 'around': 8, 'little': 8, 'who': 8, 'eyes': 8, 'are': 8, 'number': 7, 'because': 7, 'which': 7, 'Dudley': 7, 'got': 7, 'corner': 7, 'when': 7, 'from': 7, 'next': 7, 'can': 7, 'he’s': 7, 'Hagrid': 7, ... it keeps going, but I've cut it off here

It's actually not that surprising to see "Harry" so low in chapter one. It's more about describing the nature of Privet Drive and Dumbledore and the other wizards than it is about Harry. I may do a bit more with this code to create some fun graphs.
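As an aside (my sketch, not code from the original script): normalizing case before counting merges capitalized and lowercase variants of the same word.

```python
from collections import Counter

words = "The owl saw the cat".split()
print(Counter(words))                     # 'The' and 'the' are counted separately
print(Counter(w.lower() for w in words))  # lowercased first, so 'the' is counted twice
```

The same one-line change (lowercasing before handing the words to Counter) would merge entries like 'He'/'he' and 'The'/'the' in the chapter-one tally above.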
If you want to expand on this, the first thing you'd want to do is force every word to lowercase before counting it so that "a" and "A" are counted as the same word.

DragonRuby

I took a look at the idea of participating in the DragonRuby game jam. But it's WAY more work than using Unity or Unreal so I chose other tasks over this one. But it was neat to play around with and I may return to it someday.

Podcasts I'm Listening to in 2021

Codebreaker – A tech podcast. Season 1 asked the question "Is it Evil?" of various technologies. Still on my feed, but hasn't released a new episode since Dec 2016. (Approx 30 min long)

Planet Money – my dad had been trying to get me to listen to this for a while and I finally did. 21-25 minutes on an economic topic that usually helps you see something about it in a slightly different way.

The Experiment – A team-up between The Atlantic and WNYC Studios that explores topics about American culture and history. Somewhat similar to Throughline. (Approximately 20 minutes)

The Indicator from Planet Money – A quick episode on a news-related financial story. Usually somewhat jokey in tone. (Approximately 9 minutes)

Throughline – A podcast that ties current social and political events to history. (Approx. 1 hour) Still on my feed, but it's been 21 months since the last episode.

FiveThirtyEight Elections – a great, wonky podcast from the guys that brought you the most accurate election predictions. Has continued beyond the elections due to the odd circumstances of the Trump presidency.

Give Me Fiction – note: I'm still subscribed to this podcast, but it's on hiatus.

Smartless – Jason Bateman, Sean Hayes, and Will Arnett interview a celebrity. It's similar to Conan O'Brien Needs a Friend or WTF, but tends to go off on tangents a lot more often. (Approx 1 hour)

Writing Excuses – Brandon Sanderson and other famous SFF authors talk about the craft of writing. Even though I'm not an author, I find it fascinating. (Approximately.
Star Talk Radio – Neil deGrasse Tyson's official podcast feed. Some episodes are a show hosted by him in which he either interviews a guest or answers listener questions. Others are Chuck Nice and another guy talking about the science of sports...

Open Source Security Podcast – Approximately 30-45 minutes on a security topic that's either on the minds of one of the hosts or in the news recently. Fun to listen to and I usually learn something.

The Real Python Podcast – A podcast to go along with The Real Python website. They cover new articles on the site and then a weekly topic. (Approximately 1 hour)

Cooking Proof – a short podcast by the folks at America's Test Kitchen that looks at various food culture stories. Previous episodes include Fair Foods, Bowls, and Ketchup. (usually about 15-20 minutes)

Programming Update for April 2021

Sam's Thought Processes

As I've mentioned on this blog before, I always find it fascinating how the kids interpret the world. Here are a couple examples of how Sam is seeing things at this point.

LEGO Ban

Last weekend I was sitting at the computer, working on some code, when my wife came into the room.

"Did you tell Sam that he couldn't play LEGOs?"

"Of course not," I retorted.

"Did you tell him he couldn't be in the basement, then?"

"Nope"

After a couple more questions, she cuts to the chase. "What exactly did you tell Sam when he came in to see you?"

"I told him no more screen time."

Sam had been playing video games for a while as well as watching some TV. Well, now that LEGO doesn't include instruction manuals for their large sets, you have to look up the instructions online. So he interpreted no screens as meaning he couldn't use the tablet to look up the instructions for his LEGOs. Instead of asking if the tablet was OK for LEGOs, he was just complaining that I was banning him from LEGOs, leading to the confusion.

Peep and Egg

Now that the twins can read, they've been re-reading some of the books we'd previously read to them.
One of the ones they've been returning to is called Peep and Egg: I'm Not Hatching. It has a series of pages like these:

The takeaway is supposed to be that the egg is comfortable with the way things are, but could be having so much more fun if they'd come out of their comfort zone. Sam's told us that the lesson is that Peep should learn to be more patient with Egg and let him do things at his own pace.

Mozilla's Legacy

A few days ago I read this article over at Tech Republic about how Mozilla's greatest achievement is not Firefox, but the Rust programming language. They point to Firefox's declining numbers in the face of Chrome and Chromium-based browsers, and I'm inclined to agree with the author.

There is, of course, a kind of poetry to this. Although Netscape was one of the first dot-com companies and beat Microsoft to the punch at creating the first mainstream web browser, it's not Netscape Navigator which is its greatest legacy. Instead it's spinning off into Mozilla and, the most poetic part, the creation of the Javascript programming language. (Javascript was written in just a week and a half, and this episode of Red Hat's Command Line Heroes podcast does an excellent job documenting it.)

It's also not unprecedented in the tech world – Bell Labs was the research arm of AT&T, and its greatest contributions to tech have nothing to do with telephones. Their researchers invented transistors (the basis of modern computers), the C programming language (pretty much every operating system and most video games until recently were written in C), the UNIX operating system (between UNIX and its kinda-descendent Linux – a good portion of the research computers and the Internet run on these OSes), and many other technologies. So it wouldn't be the worst legacy for Mozilla to have.
Rust, in case you're unfamiliar, is a low-level language like C, but it has safety mechanisms that should help reduce the number of bugs that can be exploited by the bad guys – at least at the operating system and driver level. Google has indicated it will begin using Rust in Android. Linus Torvalds has mentioned he's not opposed to it in the Linux kernel. I believe I've also seen articles saying that Microsoft is considering using it for future versions of Windows. I don't think it would be a good thing for us to end up back in a place where there's only one browser (Chromium and its clones), but if Firefox dies, at least Mozilla will have given the world Rust. (Also, Firefox was originally called Phoenix… just as it rose from Netscape's ashes, maybe something else would rise from Firefox.)

OMG: All Your Base is 20 Years Old

As I was going through my feed reader recently, I came across an article from Ars Technica, that I'd skipped over when it first came out, which announced that the All Your Base meme is now 20 years old. I couldn't believe it. It was my first meme, a few years before Numa Numa would be the meme that crossed over into regular pop culture. It led me to The Laziest Men on Mars' page on Mp3.com, which was a kind of proto-Bandcamp in the early 2000s where indie bands (and some commercial bands) would put up MP3s to gain followers. (This ended up being my favorite song from The Laziest Men on Mars.)

Here is the video that took over all of us on the Internet 20 years ago:

Yeah, I'm sure you're wondering how in the world that was a thing anyone cared about. But, much in the same way that I wonder how anyone could ever enjoy most of early cinema, it was a different time on the Internet then. A time when putting together an amateur video and getting it to everyone was just starting to be feasible.
And it got so embedded into my brain that about six or so years ago I got the following shirt when I went to the Defcon conference:

Yeah, there's a lot of geekiness contained in that one shirt. Anyway, hope you enjoyed that trip down memory lane or, if you're a young millennial or Gen Z, are shaking your head at what we late Gen X/early millennials found entertaining.

Programming Projects: March 2021

I started off the month thinking it was going to be Python heavy and ended up doing a lot more micro-controller programming. To be fair, I was mostly programming in CircuitPython, but it definitely came out of nowhere.

Python

Civilization VI Webhook in FastAPI

Last month I created a webhook program in Flask to generate notifications when it was someone's turn in our multiplayer Civilization games. When I posted about it on reddit, someone suggested that I would be better off using FastAPI instead of Flask. I'd been hearing about FastAPI for months on Talk Python to Me and Python Bytes, but I always got the impression that it would only be useful if I was writing my website to be async. Instead what I got was a much cleaner interface, an automated documentation system, and better use of types to validate data. I'm only using a fraction of what's nice about FastAPI and I find it much nicer to use for this use case than Flask.

After I got it working as well as when I was using Flask, I started working on enhancements and bug fixes. The work that drove me nuts until I figured it out was how to clean up my code (that is, refactor to reduce duplicated code) and still maintain the ability to not have duplicate notifications from the Civilization Play By Cloud service. It turned out that I needed to handle the request in under 0.01 seconds. So I had to write the new turn to the dictionary that's functioning as a sort of bootleg database BEFORE posting to Matrix. It was quite satisfying to figure that out.
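A minimal sketch of that ordering fix (all the names here are placeholders of mine, not the project's actual code): record the new turn in the in-memory "database" before making the slow notification call, so a duplicate webhook arriving milliseconds later is already recognized.

```python
current_turns = {}   # game name -> last seen turn number (the "bootleg database")
notifications = []   # stand-in for messages actually posted to Matrix


def notify_matrix(game, player, turn):
    # placeholder for the slow network call to the Matrix server
    notifications.append(f"It's {player}'s turn in {game} (turn {turn})")


def handle_webhook(game, player, turn):
    if current_turns.get(game) == turn:
        return "duplicate ignored"
    current_turns[game] = turn          # record the turn FIRST...
    notify_matrix(game, player, turn)   # ...then make the slow network call
    return "notified"
```

If the order were reversed, a second copy of the same webhook could arrive while the Matrix post was still in flight and pass the duplicate check, producing a double notification.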
I then focused on moving some items (like usernames) out to config files so that it would be easy for someone to take my code and use it for themselves. I'm almost done with that. Afterwards I'll create a PyPI package.

Django 2 by Example and Prophecy Practicum Django Rewrite

On the other end of the Python web frameworks, I also did some work with Django. When I first tried to learn Django a few years ago it was just too complex for me to understand. I was trying to learn too many concepts at once. Now that I understand decorators and, thanks to my work with Flask and FastAPI, the concept of routes, it's a lot easier for me to learn Django. The MVC framework still separates things a lot, causing a higher cognitive load. However, I can see the beauty in how it separates things and makes it easier for teams to work on the code.

Once I had enough of the basics under my belt, I took another look at the Prophecy Practicum app I was writing for a friend. My work on creating it with Flask ground to a halt under the weight of all the extra work I was doing to develop things like an admin view. Eventually, I realized that Django may be a heavy framework for the way a lot of web development is done nowadays (feed APIs to phone apps and Javascript libraries), but for what I wanted to do, it would reduce a lot of the work I was doing because it came with those libraries built in. So I've been able to almost get to a working first stab at things with only a couple hours' work. It's a lot nicer to focus on what I want the app to do rather than having to build up a bunch of scaffolding. Also, I love the way it handles creating databases from classes.

CircuitPython

Play Dough Piano

When reading through the Adafruit Python on Microcontrollers mailing list, I found out about a Play Dough Piano (unfortunately I couldn't figure out how to download that presentation). I copied their CircuitPython code since I don't have any experience with CircuitPython HID.
But I developed the Scratch code on my own thanks to all the experience I gained last year with Scratch. I also used my recent experience with soldering to add the pins to the QTPy. If you decide to do this yourself, it's important to note that you cannot use a breadboard, because the capacitive touch pins on the QTPy read contact with it as a key being pressed. The kids had a lot of fun and it finally gave me an idea of what to make with a QTPy. I bought 2 of them when they were half off during John Park's show on Adafruit. The code is available here.

QTPy Streamdeck

Working on the CircuitPython Piano with Sam, combined with seeing someone make a Streamdeck using Cherry MX keys, made me consider that perhaps I could use my QTPy to create a Streamdeck. I'd previously been focused on recreating the ElGato Streamdeck, so I was going to use a PyPortal. However, a PyPortal is $50-ish and I don't have a 3D printer to make an enclosure. Meanwhile I already had the QTPy and some buttons. I didn't need the buttons to change or have images on them, so I went ahead and created a nice, simple Streamdeck. My first version only worked if I had OBS in focus, which defeated the purpose. I was using key.send. After speaking to someone in the Adafruit Discord, I learned that I had to do press and release. Now it works awesomely and I can start recording and pause/unpause the recording without leaving the game. The repo is available here.

Microsoft MakeCode

Sam's car

I got tired of waiting for the BBC Micro:Bit V2 to be in stock at Adafruit. It's the same price as the BBC Micro:Bit V1, but with more features, so it felt silly to get the earlier version. But I got tired of not being able to build the CuteBot with Sam, so I bit the bullet and bought the version 1. It was very easy to program in Microsoft MakeCode because there are special blocks for programming the car. We were able to program the car to "dance like a bee", avoid obstacles, and follow a path.
The latter was particularly impressive to the wife. The kids were amused that their stuffed penguins were invisible to the car's eyes. That is, when in obstacle avoidance mode, it would still crash into the penguins. I thought that was a little odd myself. But then I did some research and realized the "eyes" were an ultrasonic sensor. Since it's using echo-location, it doesn't work on furry objects – the same reason that padding is used to improve the acoustics in a room by dampening the noise. The object needs to be reasonably solid to let the car know it's there. I still have some more tinkering to do with Sam to add some more options to the examples the car's manufacturer printed in the manual.

C#

GameDev.Tv Multiplayer class

I finished up the online multiplayer video game class I was taking through GameDev.Tv. As I've said in previous updates – I now truly appreciate the work that goes into an online multiplayer game. It'll be a while before I add multiplayer to the game I've been working on, but I'll definitely be studying what I did in this class when the time comes. I highly recommend it if you're considering making an online multiplayer game for the first time – especially if you want to do it over Steam.
https://www.ericsbinaryworld.com/
Jeff,

You can wrap the function in a macro that checks for truncation:

#include <stdio.h>

#define bar(j) foo(sizeof(j) > sizeof(int) ? -1 : j)

static void foo(int j) {
    printf("foo(j) = %d\n", j);
}

int main(int argc, char *argv[]) {
    /* 8589934592LL == 2^33 */
    long long i = 8589934592LL + 11;
    foo(i);
    bar(i);
    return 0;
}

foo(j) = 11
foo(j) = -1

~Jim.

On Wed, Aug 7, 2019 at 9:59 AM Jeff Squyres (jsquyres) via mpi-forum <mpi-forum@lists.mpi-forum.org> wrote:

> SHORT VERSION
> =============
>
> Due to the possibility of silently introducing errors into user
> applications, the BigCount WG no longer thinks that C11 _Generic is a good
> idea. We are therefore dropping that from our proposal. The new proposal
> will therefore essentially just be the addition of a bunch of
> MPI_Count-enabled "_x" functions in C, combined with the addition of a
> bunch of polymorphic MPI_Count-enabled interfaces in Fortran.
>
> MORE DETAIL
> ===========
>
> Joseph Schuchart raised a very important point in a recent mailing thread:
> the following C/C++ code does not raise a compiler warning:
>
> -----
> #include <stdio.h>
>
> static void foo(int j) {
>     printf("foo(j) = %d\n", j);
> }
>
> int main(int argc, char *argv[]) {
>     /* 8589934592LL == 2^33 */
>     long long i = 8589934592LL + 11;
>     foo(i);
>     return 0;
> }
> -----
>
> If you compile and run this program on a commodity x86-64 platform, a) you
> won't get a warning from the compiler, and b) you'll see "11" printed out.
> I tried with gcc 9 and clang 8 -- both with the C and C++ compilers. I
> even tried with "-Wall -pedantic". No warnings.
>
> This is because casting from a larger int type to a smaller int type is
> perfectly valid C/C++.
>
> Because of this, there is a possibility that we could be silently
> introducing errors into user applications. Consider:
>
> 1. An application upgrades its "count" parameters to type MPI_Count for
> all calls to MPI_Send.
> --> Recall that "MPI_Count" already exists in MPI-3.1, and is likely of
> type (long long) on commodity x86-64 platforms
> 2. The application then uses values in that "count" parameter that are
> greater than 2^32.
>
> If the user's MPI implementation and compiler both support C11 _Generic,
> everything is great.
>
> But if either the MPI implementation or the compiler do not support C11
> _Generic, ***the "count" value will be silently truncated at run time***.
>
> This seems like a very bad idea, from a design standpoint.
>
> We have therefore come full circle: we are back to adding a bunch of "_x"
> functions for C, and there will be no polymorphism (in C). Sorry, folks.
>
> Note that Fortran does not have similar problems:
>
> 1. Fortran compilers have supported polymorphism for 20+ years
> 2. Fortran does not automatically cast between INTEGER values of different
> sizes
>
> After much debate, the BigCount WG has decided that C11 _Generic just
> isn't worth it. That's no reason to penalize Fortran, though.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> _______________________________________________
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
>
_______________________________________________
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://www.mail-archive.com/mpi-forum@lists.mpi-forum.org/msg00415.html
In this article we are going to learn how we can create a rounded or circular button in Kivy using canvas. You must have a very clear concept of canvas, button and their properties to learn how you can make buttons like this.

As we know, canvas is the root object used for drawing by a Widget. Each Widget in Kivy already has a Canvas by default. When you create a widget, you can create all the instructions needed for drawing. To use canvas you have to import the graphics in your file:

from kivy.graphics import Rectangle, Color

Some important properties used in this article –

border:
1) Border used for the BorderImage graphics instruction. Used with background_normal and background_down. Can be used for custom backgrounds.
2) It must be a list of four values: (bottom, right, top, left).
3) border is a ListProperty and defaults to (16, 16, 16, 16)

ButtonBehavior:
1) The ButtonBehavior mixin class provides Button behavior.
2) You can combine this class with other widgets, such as an Image, to provide alternative buttons that preserve Kivy button behavior.

Basic Approach:
-> import kivy
-> import kivy App
-> import widget
-> import Canvas i.e.: from kivy.graphics import Rectangle, Color
-> set minimum version (optional)
-> Extend the Widget class
-> Create the App class
-> create the .kv file
-> create the button using the canvas
-> Use the border property to give them a circular shape
-> Add action/callback if needed
-> return a Widget
-> Run an instance of the class

Implementation of the Approach

.kv file

Output:

Note: Widgets are still rectangles. That means that even if you click on the rounded corners, the button still receives the event.
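The article's actual .kv listing did not survive extraction. As an illustration only, a hypothetical sketch of a rounded button in kv might look like the following; note that the article describes using the border property on the button background, whereas this sketch instead draws a RoundedRectangle on the widget's canvas, and every name and value here is my own guess rather than the article's code.

```kv
# Hypothetical sketch: a Button whose default rectangular background is
# hidden and replaced by a rounded shape drawn on its canvas.
<RoundButton@Button>:
    background_color: 0, 0, 0, 0   # make the stock background invisible
    canvas.before:
        Color:
            rgba: 0.2, 0.4, 0.8, 1
        RoundedRectangle:
            pos: self.pos
            size: self.size
            radius: [self.height / 2]  # half the height gives fully rounded ends
```

As the article's closing note says, the widget itself is still a rectangle, so clicks on the (now transparent) corners are still delivered to the button.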
https://www.geeksforgeeks.org/circular-oval-like-button-using-canvas-in-kivy-using-kv-file/
This example shows the most important part of type hinting: it is optional. Python hasn't changed to suddenly require static typing. But if we add a little more:

Alas, type hinting now warns us that the first argument has to be a string:

Fortunately there are several ways to solve this, each useful in various circumstances. For example, we could use regular Python and make this a keyword option with a default:

The Union type from typing, as shown here, lets the type be from any of the provided type values. In this case, str or bool. Here, though, is the approach that best conveys the meaning of optional:

No type hinting. Then, imagine a second file, in the same directory, named greeting.pyi:

For those of us developing using Python 3.4, but still wanting to use the typing module, will PyCharm perform all of the assistance outlined above if we install a backported version of the typing module ourselves? Assuming PyCharm will work as outlined above with Python 3.4 & the typing module, there (of course) remains the problem that Python 3.4 does not have the typing module in the standard library. When distributing code designed to function in both Python 3.4 & 3.5 that imports the typing module, my initial thought to work around this problem is to install a copy of the typing module alongside my own code's modules (giving it a different name, e.g. localtyping), and then importing that only on an ImportError exception when trying to "import typing". Does that strike you as a wise strategy?

PyCharm supports the typing module from PyPI for Python 2.7, Python 3.2-3.4. For 2.7 you have to put type hints in *.pyi stub files since function annotations were added in Python 3.0. I believe it's OK to specify typing as a conditional dependency (sys.version_info <= (3, 5)) in your setup.py or in a special version of requirements.txt for Python < 3.5.

One big part of PyCharm that would help with creating proper type hinting and proper documentation is _still_ broken!
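The code snippets this passage refers to were stripped out of the page. Hedged reconstructions in the spirit of PEP 484's examples (the greeting name matches the post's greeting.pyi mention; the function bodies and variant names are my guesses):

```python
from typing import Optional, Union


# With a plain annotation, passing a non-str first argument gets flagged:
def greeting(name: str) -> str:
    return 'Hello ' + name


# Regular Python: make it a keyword option with a default:
def greeting_default(name: str = 'stranger') -> str:
    return 'Hello ' + name


# Union lets the type be any of the provided type values (here str or bool):
def greeting_union(name: Union[str, bool]) -> str:
    return 'Hello ' + (name if isinstance(name, str) else 'anonymous')


# Optional[str] best conveys the meaning of "a str, or None":
def greeting_optional(name: Optional[str] = None) -> str:
    return 'Hello ' + (name if name is not None else 'stranger')
```

The stub-file approach mentioned next works the same way: the .py file carries no hints at all, while a sibling greeting.pyi declares the signatures for the type checker.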
Has been since the release of 5.0, please fix it.

Thanks, we're working on it.

I'm curious if your team had considered implementing a way to rip all of the annotations from a file and put them into a .pyi stub in one easy command. It doesn't seem like it would be too difficult to implement, and would be valuable when inevitable complaints about verbosity come. If the response to that can be "sure, one sec … okay done, rebase with master" I imagine it could quell a lot of the opposition to usage of this feature.

Please send a feature request to our issue tracker.

I created the issue:

+10 on that!!!

+10 on that from me too

Assuming we have type hinted some (or all) of our code, can PyCharm provide a "report" for type errors project-wise? How different is it from mypy's type checker?

You can run any subset of the code inspections offered by PyCharm via the "Code | Inspect Code" menu. Type checker is only one of many inspections that are based on the type information found in the source code. You can browse the list of the inspections in "File | Settings | Editor | Inspections". Mypy is focused on just checking the types and a few extra things.

Any chance to build a faster IDE? Since version 4.0 it gets more and more slow on my Mac!! Even after changing the options in the vmoptions file, and with an i5 with 8G RAM. I tried version 5 and same "problem"…

We're constantly working on performance improvements. Bug fix updates make PyCharm 5 faster. Please try our latest EAP.

What are the differences between using this syntax and using e.g. "@param" in a docstring? (AFAIK this syntax is easy to interact with at runtime, though I guess you *can* parse a docstring at runtime; and this syntax is standard so it should have wider support among other third-party tools – anything else?)

The syntax you can use in @param tags inside docstrings isn't specified by any standard, so the tools are not compatible with respect to type hints.
PEP 484 defines the standard type hinting syntax for Python that you can rely upon. We do our code analysis and type checking statically, i.e. based almost only on the source code. A few runtime things we do: evaluate sys.path in order to find the installed libraries, and record the types of expressions while debugging if the "Collect run-time type information for code insight" option is set.

> But the worry is that if type hinting becomes "best practice", library authors will be pressured into supporting it.

Meh. PEP8 became a "best practice" years ago. Nevertheless, there are plenty of projects out there which ignore PEP8 completely. Now they'll just have one more thing to ignore.

Nitpick: We don't want to turn Python into a strongly-typed language. But Python is strongly-typed (as opposed to e.g. C, which has static weak typing).

I concede your picked nit. I should have used jargon from the PEP. For those that don't follow: the introduction should have said "We don't want to turn Python into a *statically* typed language." instead.

I don't think it's really nit-picking. It's plain wrong, and statements like those help keep the confusion up for newcomers, making it worth changing. Oh my, I guess that's why it's in the #1 spot of the Python FAQ:

You are right, I just changed the article from strongly-typed to statically-typed.

Not sure if it's a bug or a feature, but PyCharm 5.0.2 doesn't complain about a wrong return type, e.g. "def greeting() -> str: return 1" produces no type hint warning.

Hmm, you're right, looks like a 5.0.2 regression. Can you file a bug report? If not, let me know and I'll file it.

That would be very nice if you make a bug report.

I.e., Python3 code with perfectly implemented type hints would only make it easier on the developer, and not the interpreter compiler, yeah?
The inspections in 5.0.2 complain about valid code: it wants typing to be added to requirements.txt and says that List's __getitem__ is not implemented. Like, what?

Hi Gabor, sorry, I missed replying to this. Are you still having this issue?

Any chance you can implement support for mypy-style type hinting for Python 2.7? Basically it's based on PEP 484, so the inline type comments are the same as PEP 484 (and PyCharm already understands them), and method annotations are also defined in comments:

def get_users(user_ids):
    # type: (Iterable[int]) -> List[User]
    ...

Thanks!

Oh, by the way, after reading PEP 484, I've realized that the syntax I mentioned above is actually part of PEP 484: However, it doesn't look like PyCharm supports this. Could you please add this syntax as well? Thanks!

Hi Evgeny. Here is a ticket where this was discussed.

Hi Paul, thank you for responding. The ticket you mentioned is an old ticket that was created before the syntax was standardized. However, I've found a related new ticket: And I really hope to see this implemented soon

It's now working in today's EAP.

Wow, great news! Is there a way to try it with IDEA? By the way, I am not getting any email notifications about replies posted to my comments here – is that correct?

What if we want to create stub files for libraries? I can't imagine creating .pyi files and putting them in your site_packages directory is very maintainable.

How do I turn this off? We have our own annotation system, and PyCharm now trips all over the place. PEP 484 is provisional and optional, but your type checker causes all sorts of "shadowing" errors when types are mentioned in strings. Can we turn off trying to parse those strings? Simply turning off the TypeChecker does not stop it.

Hi Charles, this sounds like something we haven't really anticipated. Can you add a feature request ticket at ?

On my machine Annotate is not working, so can you guys please suggest what to do??

Sorry to hear that.
Can you describe the steps you took when trying to use it? I am curious if I'm overlooking something as well: So, it is working in that I can see the properties/methods of the type hinted class. However, if I set self.client to a string, I'd expect it to yell at or warn me.

As for the first case, I don't think that a type checker error here would make much sense, since there is no rule that requires that class attributes initialized in "__init__()" and the respective parameters should have identical names. I mean, the annotation "TelnetClient" for the parameter "client" doesn't imply that "self.client" should have the same type. As for the second question, you're right, it's a known problem with augmented assignments for strings — PY-6426. It's pretty old, but, no doubt, we're going to address it in upcoming releases. Thanks for reminding us of it!

Well, it appears type checking works in every other scenario: calling the method and passing in the wrong type (as demonstrated in this blog post) and instantiating a class. So:

def say(msg: str):
    return msg

say(1)

yells at you, which is good.

def say(msg: str):
    msg += 1
    return msg

The above does not alert you of anything, however. Maybe there's a good reason for this, though.

Hi! Is there any plan to support annotations like

def main(my_param: 'A beautiful param'):
    pass

as suggested in The goal is not to use them, just do not display an error when doing that in PyCharm, because some libraries use them (like begins, for example).

In PEP 484 they describe a way to suppress type checking in such cases using the "@no_type_check" decorator or "# type: ignore" comments. Also there is a special "@no_type_check_decorator" decorator that the author of the "begin" library could use for "begins.start". Unfortunately, none of these methods are supported in PyCharm at the moment. Please feel free to leave a vote for PY-17044, in particular.

Type checking on class member variables doesn't seem to work.
PyCharm (Community Edition, 2016.2.3) does not point out any type mismatch in the following code:

class TestClass:
    x = None  # type: int

t = TestClass()
t.x = "foo"

I read the chain above started by Evgeny regarding this style of type hinting (introduced in PEP 484), to which Paul Everitt responds with the link. Is it true that PyCharm hasn't made any progress on this since Nov 2015? If so, that strikes me as rather sad. Mypy is very active at the moment, with Guido seeming to be leading the way on that project; perhaps you should consider integrating mypy with PyCharm instead of re-implementing type-checking in your product and being behind the curve. Also, PEP 526 seems to be a superior spec for annotating class members, and mypy already seems to be heading in that direction. It would be good to see PyCharm keeping up with this:

Actually, comments with type hints were supported long ago. You can see that PyCharm is able to infer the type of "TestClass.x" because there is code completion on this attribute and warnings if you try passing it to some function that expects an argument of a type other than "int" or accessing any method that doesn't belong to "int". The actual problem here is that the "Type checker" inspection is a bit more lenient than mypy. In particular, it's allowed to redefine a variable by assigning a value of another type to it (even when a type annotation is present). We're planning to implement a stricter mode closer to mypy in the future, though. We indeed could create a dedicated inspection to run mypy in batch mode when you run "Inspect Code…", but it's going to work well only there. For instance, mypy is not able to handle incomplete, syntactically invalid code as our parser does. Neither can it perform type checks incrementally, when type information for unmodified files is persisted and it's not necessary to parse and infer types for these files all anew every time.
The latter limitation is critical from a performance point of view in the IDE, since most refactorings, inspections, intentions, and "find usages" rely heavily on the type inference engine to operate properly. Unfortunately, it also makes it difficult to quickly integrate recent updates to the "typing" specification into PyCharm.

As for PEP 526, it is going to be available in PyCharm 2016.3, with initial support for variable annotations included in the first EAP.
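To make the two annotation styles discussed in this thread concrete, here is a minimal sketch (class names are illustrative). Note that neither style is enforced at runtime; the mismatch is only reported by an IDE or an external checker such as mypy:

```python
class CommentStyle:
    # PEP 484 comment-style hint, as in the example above:
    x = None  # type: int

class AnnotationStyle:
    # PEP 526 variable annotation (Python 3.6+):
    x: int = 0

t = AnnotationStyle()
t.x = "foo"  # runs fine; a strict checker flags the str-to-int assignment
print(AnnotationStyle.__annotations__)
```

The annotation is recorded in `__annotations__`, which is what tools inspect; Python itself never raises on the mismatched assignment.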
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Couchdb Wiki" for change notification. The "Replications_and_conflicts" page has been changed by BrianCandler. The comment on this change is: Initial version.

New page:

= Replication and conflict model =

Let's take the following example to illustrate replication and conflict handling.

 * Alice has a document containing Bob's business card
 * She synchronizes it between her desktop PC and her laptop
 * On the desktop PC, she updates Bob's E-mail address. Without syncing again, she updates Bob's mobile number on the laptop.
 * Then she replicates the two to each other again

The question is, what happens to these conflicting updated documents?

== CouchDB replication ==

CouchDB works with JSON documents inside databases. Replication of databases takes place over HTTP, and can be either a "pull" or a "push", but is unidirectional. So the easiest way to perform a full sync is to do a "push" followed by a "pull" (or vice versa).

So, Alice creates v1 and syncs it. She updates to v2a on one side and v2b on the other, and then replicates. What happens? The answer is simple: ''both'' versions exist on both sides!

{{{
   DESKTOP                            LAPTOP

+---------+
| /db/bob |                                          INITIAL
|   v1    |                                          CREATION
+---------+

+---------+                       +---------+
| /db/bob | ------------------->  | /db/bob |        PUSH
|   v1    |                       |   v1    |
+---------+                       +---------+

+---------+                       +---------+        INDEPENDENT
| /db/bob |                       | /db/bob |        LOCAL
|   v2a   |                       |   v2b   |        EDITS
+---------+                       +---------+

+---------+                       +---------+
| /db/bob | ------------------->  | /db/bob |        PUSH
|   v2a   |                       |   v2a   |
+---------+                       |   v2b   |
                                  +---------+

+---------+                       +---------+
| /db/bob | <-------------------  | /db/bob |        PULL
|   v2a   |                       |   v2a   |
|   v2b   |                       |   v2b   |
+---------+                       +---------+
}}}

After all, this is not a filesystem, so there's no restriction that only one document can exist with the name /db/bob. These are just "conflicting" revisions under the same name.
Because the changes are always replicated, the data is safe. Both machines have identical copies of both documents, so failure of a hard drive on either side won't lose any of the changes.

Another thing to notice is that peers do not have to be configured or tracked. You can do regular replications to peers, or you can do one-off, ad-hoc pushes or pulls. After the replication has taken place, there is no record kept of which peer any particular document or revision came from.

So the question now is: what happens when you try to read /db/bob? By default, CouchDB picks one arbitrary revision as the "winner", using a deterministic algorithm so that the same choice will be made on all peers. The same happens with views: the deterministically-chosen winner is the only revision fed into your map function.

Let's say that the winner is v2a. On the desktop, if Alice reads the document she'll see v2a, which is what she saved there. But on the laptop, after replication, she'll also see only v2a. It could look as if the changes she made there have been lost - but of course they have not; they have just been hidden away as a conflicting revision. But eventually she'll need these changes merged into Bob's business card, otherwise they ''will'' effectively have been lost.

Any sensible business-card application will, at minimum, have to present the conflicting versions to Alice and allow her to create a new version incorporating information from them all. Ideally it would merge the updates itself.

== Concurrent updates on a single node ==
So imagine two users on the same node are fetching Bob's business card, updating it concurrently, and writing it back:

{{{
USER1  ----------->  GET /db/bob
       <-----------  {"_rev":"1-aaa", ...}

USER2  ----------->  GET /db/bob
       <-----------  {"_rev":"1-aaa", ...}

USER1  ----------->  PUT /db/bob?rev=1-aaa
       <-----------  {"_rev":"2-bbb", ...}

USER2  ----------->  PUT /db/bob?rev=1-aaa
       <-----------  409 Conflict  (not saved)
}}}

User2's changes are rejected, so it's up to the app to fetch /db/bob again, and either:

 * apply the same changes as were applied to the earlier revision, and submit a new PUT
 * redisplay the document so the user has to edit it again
 * just overwrite it with the document being saved before (which is not advisable, as user1's changes will be silently lost)

So when working in this mode, your application still has to be able to handle these conflicts and have a suitable retry strategy, but these conflicts never end up inside the database itself.

== Conflicts in batches ==

There are two different ways that conflicts can end up in the database:

 1. Conflicting changes made on different databases, which are replicated to each other, as shown earlier.
 2. Changes submitted in a single `_bulk_docs` request with "all_or_nothing":true, which are stored even if they conflict. They also replicate together as an atomic unit.

= Working with conflicting documents =

== HTTP API ==

The basic `GET /db/bob` operation will not show you any information about conflicts. You see only the deterministically-chosen winner, and get no indication as to whether other conflicting revisions exist or not.

If you do `GET /db/bob?conflicts=true`, and the document is in a conflict state, then you will get the winner plus a _conflicts member containing an array of the revs of the other, conflicting revision(s). You can then fetch them individually using subsequent `GET /db/bob?rev=xxxx` operations. As far as I can tell, from the list of _conflicts you cannot fetch all those versions in one go with a multi-document fetch (_all_docs). They have to be individual GETs.
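The 409 exchange above can be simulated without a server. This is a hypothetical in-memory sketch of CouchDB's revision check (real revisions are strings like "1-aaa"; integers are used here for brevity):

```python
class ConflictError(Exception):
    """Stands in for an HTTP 409 Conflict response."""

db = {}  # doc id -> (rev, document)

def get(doc_id):
    rev, doc = db[doc_id]
    return rev, dict(doc)

def put(doc_id, doc, rev=None):
    current = db.get(doc_id)
    # Reject the write unless the caller supplies the current rev,
    # just as CouchDB answers 409 to a stale ?rev=... parameter.
    if current is not None and rev != current[0]:
        raise ConflictError(doc_id)
    new_rev = (current[0] if current else 0) + 1
    db[doc_id] = (new_rev, doc)
    return new_rev

# Both users read rev 1, then both try to write it back:
put("bob", {"email": "old@example.com"})
rev_user1, _ = get("bob")
rev_user2, _ = get("bob")
put("bob", {"email": "new@example.com"}, rev=rev_user1)  # accepted, rev 2
try:
    put("bob", {"mobile": "555-0100"}, rev=rev_user2)     # stale rev
except ConflictError:
    print("409 Conflict (not saved)")
```

The second writer's only options are exactly those listed above: re-fetch, then reapply, redisplay, or (unwisely) clobber.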
Your application can then choose to display them all to the user. Or it could attempt to merge them, write back the merged version, and delete the conflicting versions - that is, "resolve" the conflict.

If you are merging multiple conflicts into a single version, you need to delete all the conflicting revisions explicitly. However, a single _bulk_docs update can write the new version and simultaneously delete all the other ones.

== View API ==

Views also only get the winning revision of a document. However, they do also get a _conflicts member if there are any conflicting revisions. This means you can write a view whose job is specifically to locate documents with conflicts.

If you do this, you can have a separate "sweep" process which periodically scans your database, looks for documents which have conflicts, fetches the conflicting revisions, and resolves them. Whilst this keeps the main application simple, the problem with this approach is that there will be a window between a conflict being introduced and it being resolved. From a user's viewpoint, a document they just saved successfully may appear to suddenly lose their changes, only to be resurrected some time later. This may or may not be acceptable. Also, it's easy to forget to start the sweeper, or not to implement it properly, and this will introduce odd behaviour which will be hard to track down.

== Suggested code ==

{{{
Fetch the document with ?conflicts=true. If conflicting revisions exist,
fetch and merge them, then issue a single _bulk_docs request with an
update to the first rev and deletes of the other revs.
}}}

This could either be done on every read (in which case you could replace all calls to GET in your application with calls to a library which does the above), or as part of your sweeper code. And here is an example of this in Ruby using the low-level RestClient.
{{{
require 'rubygems'
require 'restclient'
require 'json'

DB=""

# Write multiple documents as all_or_nothing, can introduce conflicts
def writem(docs)
  JSON.parse(RestClient.post("#{DB}/_bulk_docs", {
    "all_or_nothing" => true,
    "docs" => docs,
  }.to_json))
end

# Write one document, return the rev
def write1(doc, id=nil, rev=nil)
  doc['_id'] = id if id
  doc['_rev'] = rev if rev
  writem([doc]).first['rev']
end

# Read a document, return *all* revs
def read1(id)
  retries = 0
  loop do
    # FIXME: escape id
    res = [JSON.parse(RestClient.get("#{DB}/#{id}?conflicts=true"))]
    if revs = res.first.delete('_conflicts')
      begin
        revs.each do |rev|
          res << JSON.parse(RestClient.get("#{DB}/#{id}?rev=#{rev}"))
        end
      rescue
        retries += 1
        raise if retries >= 5
      end
    end
    return res
  end
end

# Create DB
RestClient.delete DB rescue nil
RestClient.put DB, {}.to_json

# Write a document
rev1 = write1({"hello"=>"xxx"},"test")
p read1("test")

# Make three conflicting versions
write1({"hello"=>"foo"},"test",rev1)
write1({"hello"=>"bar"},"test",rev1)
write1({"hello"=>"baz"},"test",rev1)

res = read1("test")
p res

# Now let's replace these three with one
res.first['hello'] = "foo+bar+baz"
res.each_with_index do |r,i|
  unless i == 0
    r.replace({'_id'=>r['_id'], '_rev'=>r['_rev'], '_deleted'=>true})
  end
end
writem(res)
p read1("test")
}}}

An application written this way never has to deal with a PUT 409, and is automatically multi-master capable. You can see that it's straightforward enough when you know what you're doing. It's just that CouchDB doesn't currently provide a convenient HTTP API for "fetch all conflicting revisions", nor "PUT to supersede these N revisions", so you need to wrap these yourself. I also don't know of any client-side libraries which provide support for this.

== Merging and revision history ==

Actually performing the merge is an application-specific function. It depends on the structure of your data. Sometimes it will be easy: e.g.
if a document contains a list which is only ever appended to, then you can perform a union of the two list versions.

Some merge strategies look at the changes made to an object, compared to its previous version. This is how git's merge function works. For example, to merge Bob's business card versions v2a and v2b, you could look at the differences between v1 and v2b, and then apply these changes to v2a as well.

With CouchDB, you can sometimes get hold of old revisions of a document. For example, if you fetch `/db/bob?rev=v2b&revs_info=true` you'll get a list of the previous revision ids which ended up with revision v2b. Doing the same for v2a you can find their common ancestor revision. However, if the database has been compacted, the content of that document revision will have been lost. revs_info will still show that v1 was an ancestor, but report it as "missing".

{{{
BEFORE COMPACTION           AFTER COMPACTION

     ,-> v2a                     v2a
 v1
     `-> v2b                     v2b
}}}

So if you want to work with diffs, the recommended way is to store those diffs within the new revision itself. That is: when you replace v1 with v2a, include an extra field or attachment in v2a which says which fields were changed from v1 to v2a. This unfortunately does mean additional book-keeping for your application.

= Comparison with other replicating data stores =

The same issues arise with other replicating systems, so it can be instructive to look at these and see how they compare with CouchDB. Please feel free to add other examples.

== Unison ==

[ Unison] is a bi-directional file synchronisation tool. In this case, the business card would be a file, say bob.vcf. When you run unison, changes propagate both ways. If a file has changed on one side but not the other, the new version replaces the old. (Unison maintains a local state file so that it knows whether a file has changed since the last successful replication.)

In our example it has changed on both sides. Only one file called "bob.vcf" can exist within the filesystem.
Unison solves the problem by simply ducking out: the user can choose to replace the remote version with the local version, or vice versa (both of which would lose data), but the default action is to leave both sides unchanged.

From Alice's point of view, at least this is a simple solution. Whenever she's on the desktop she'll see the version she last edited on the desktop, and whenever she's on the laptop she'll see the version she last edited there.

But because no replication has actually taken place, the data is not protected. If her laptop hard drive dies, she'll lose all her changes made on the laptop; ditto if her desktop hard drive dies. It's up to her to copy across one of the versions manually (under a different filename), merge the two, and then finally push the merged version to the other side. Note also that the original file (version v1) has been lost.

== Git ==

[ Git] is a well-known distributed source control system. Like unison, git deals with files. However, git considers the state of a whole set of files as a single object, the "tree". Whenever you save an update, you create a "commit" which points to both the updated tree and the previous commit(s), which in turn point to the previous tree(s). You therefore have a full history of all the states of the files.

In git, replication is a "pull", importing changes from a remote peer into the local repository. A "pull" does two things: first "fetch" the state of the peer into the remote tracking branch for that peer; and then attempt to "merge" those changes into the local branch.

Now let's consider the business card. Alice has created a git repo containing bob.vcf, and cloned it across to the other machine. The branches look like this, where AAAAAAAA is the SHA1 of the commit.
{{{
---------- desktop ----------          ---------- laptop ----------
master: AAAAAAAA                       master: AAAAAAAA
remotes/laptop/master: AAAAAAAA        remotes/desktop/master: AAAAAAAA
}}}

Now she makes a change on the desktop, and commits it into the desktop repo; then she makes a different change on the laptop, and commits it into the laptop repo.

{{{
---------- desktop ----------          ---------- laptop ----------
master: BBBBBBBB                       master: CCCCCCCC
remotes/laptop/master: AAAAAAAA        remotes/desktop/master: AAAAAAAA
}}}

Now on the desktop she does "git pull laptop". Firstly, it takes a diff between AAAAAAAA and CCCCCCCC and tries to apply it to BBBBBBBB. If this is successful, then you'll get a new version with a merge commit.

{{{
---------- desktop ----------          ---------- laptop ----------
master: DDDDDDDD                       master: CCCCCCCC
remotes/laptop/master: CCCCCCCC        remotes/desktop/master: AAAAAAAA
}}}

Then Alice has to log on to the laptop and run "git pull desktop". A similar process occurs. The remote tracking branch is updated:

{{{
---------- desktop ----------          ---------- laptop ----------
master: DDDDDDDD                       master: CCCCCCCC
remotes/laptop/master: CCCCCCCC        remotes/desktop/master: DDDDDDDD
}}}

then a merge takes place. This is a special case: CCCCCCCC is one of the parent commits of DDDDDDDD, so the laptop can "fast forward" from CCCCCCCC to DDDDDDDD directly without having to do any complex merging. This leaves the final state as:

{{{
---------- desktop ----------          ---------- laptop ----------
master: DDDDDDDD                       master: DDDDDDDD
remotes/laptop/master: CCCCCCCC        remotes/desktop/master: DDDDDDDD
}}}

Now this is all well and good, but you may wonder how it is relevant when thinking about couchdb.

Firstly, note what happens in the case when the merge algorithm fails. The changes ''are'' still propagated from the remote repo into the local one, and are available in the remote tracking branch; so unlike unison, you know the data is protected.
It's just that the local working copy may fail to update, or may diverge from the remote version. It's up to you to create and commit the combined version yourself, but you are guaranteed to have all the history you might need to do this.

Note that whilst it's possible to build new merge algorithms into Git, the standard ones are focussed on line-based changes to source code. They don't work well for XML or JSON if it's presented without any line breaks.

The other interesting consideration is multiple peers. In this case you have multiple remote tracking branches, some of which may match your local branch, some of which may be behind you, and some of which may be ahead of you (i.e. contain changes that you haven't yet merged).

{{{
master: AAAAAAAA
remotes/foo/master: BBBBBBBB
remotes/bar/master: CCCCCCCC
remotes/baz/master: AAAAAAAA
}}}

Note that each peer is explicitly tracked, and therefore has to be explicitly created. If a peer becomes stale or is no longer needed, it's up to you to remove it from your configuration and delete the remote tracking branch. This is different to couchdb, which doesn't keep any peer state in the database.
pytz format challenge task 2

I need help with this challenge. I have attached my code:

    import datetime
    import pytz

    fmt = '%m-%d %H:%M %Z%z'
    starter = datetime.datetime(2015, 10, 21, 4, 29)
    pacific = pytz.timezone('US/Pacific')
    local = pacific.localize(starter)

    def pytz_string:
        return local.strftime(fmt)

2 Answers

Ransom Wright (3,014 Points)

You most likely already passed it, but just in case you haven't: you made a function named pytz_string, not a variable, so you shouldn't need to have it return anything.

Sam Cerwinske (Courses Plus Student, 2,876 Points)

So it kinda walks you through what it wants as a line in the challenge:

    pytz_string = local.strftime(fmt)
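For comparison, on Python 3.9+ the standard-library zoneinfo module gives the same result without pytz (assuming the system tz database is available). Note that pytz_string here is a plain variable, which is what the challenge asks for:

```python
import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

fmt = '%m-%d %H:%M %Z%z'
starter = datetime.datetime(2015, 10, 21, 4, 29, tzinfo=ZoneInfo('US/Pacific'))

# A variable, not a def:
pytz_string = starter.strftime(fmt)
print(pytz_string)  # 10-21 04:29 PDT-0700
```

With zoneinfo you attach the timezone directly via tzinfo=, so there is no localize() step.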
> ASYNC13.rar > WAITFOR.C

/* A module of ASYNCx.LIB version 1.02 */

#include <dos.h>    /* MK_FP() */
#include <ctype.h>  /* toupper() */
#include "asyncdef.h"

/*************************************************************
 Wait for the string pointed to by s to be received on the
 specified port.  Wait for up to between t and t+1 18ths of a
 second for the string to arrive.  If mode is 0, the string
 must match exactly.  If the mode is 1, the case of the
 received string may vary from that of the string s points to.
*************************************************************/

int a_waitfor(ASYNC *p, char *s, int t, int mode)
{
   unsigned long t0, far *t1;
   char c, *cp;

   if (p)
   {
      cp = s;
      t1 = (unsigned long far *)MK_FP(0x40, 0x6c);   /* BIOS tick counter */
      t0 = *t1;
      while (*cp && (*t1) - t0 <= (unsigned long)t)
         if (a_icount(p))       /* a character is available from the port */
         {
            c = a_getc(p);      /* read the character */
            if (mode)           /* the case of the character does not matter */
               if (toupper(*cp) == toupper(c)) /* this is the right character */
                  cp++;         /* point cp to the next character to look for */
               else             /* this is not the right character */
                  cp = s;       /* start cp back at the beginning of the string */
            else                /* the case of the character does matter */
               if (*cp == c)    /* this is the character I was looking for */
                  cp++;         /* point cp to the next character to look for */
               else             /* this is not the right character */
                  cp = s;       /* start cp back at the beginning of the string */
         }
      return (int)(*cp);        /* return 0 if the string was found */
   }
   return -1;
}  /* end of a_waitfor(p,s,t,mode) */
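The scanning loop in a_waitfor() ports easily to other languages. Here is a hypothetical Python translation of just the matching logic (no serial I/O or BIOS tick counter); like the original, a mismatching character is only compared against the start of the string on the next iteration:

```python
def match_feed(target, stream, ignore_case=False):
    """Return True if `target` appears in the character stream,
    using the same restart-on-mismatch scan as a_waitfor()."""
    if ignore_case:
        target = target.upper()
    i = 0  # plays the role of cp: index of the next char to match
    for c in stream:
        if ignore_case:
            c = c.upper()
        if c == target[i]:
            i += 1               # advance to the next expected char
            if i == len(target):
                return True      # whole string seen
        else:
            i = 0                # restart at the beginning of target
    return False

print(match_feed("OK", "CONNECT 2400\r\nOK\r\n"))          # True
print(match_feed("login:", "Login: ", ignore_case=True))   # True
```

This naive restart (shared with the C original) can miss matches whose prefix overlaps itself, e.g. "aab" inside "aaab"; a full solution would use a KMP-style failure table.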
Gold Sleeper Trend You Must Know About. Here's the basic idea: metals prices, especially precious metals prices, have been increasing at a faster rate than mining costs. Gold and silver are up 75.9% and 61.7% respectively over the last three years, while the cost of things like power, equipment, and wages have not risen as much (-1.0% for oil, and 5.1% for wages, as examples). Obviously, this is good for producing companies, and we have seen the market's reaction in their share prices. What may not be so obvious is that companies with major, low-grade discoveries in hand that are preparing economic studies on their projects may enjoy a particular window of opportunity. Sometimes 300-day trailing averages are used to project prices used in estimating mining project economics, sometimes 3-year averages are used. In either case, the top line is going to look great, especially since many costs have risen little or even gone down as a result of the crash of 2008. Here at Casey Research, we're predicting more economic turmoil ahead, as the global economy exits the eye of the storm. As that happens, many costs could drop substantially, along with economic activity in general. That would hurt the top line for base metal projects (copper, nickel, etc.), but would probably drive it even higher for gold projects (and silver ones that don't rely too heavily on base metal credits). For instance, Osisko Mining (T.OSK), one of the big winners among our recent speculations, was able to take advantage of the crash of 2008 to greatly improve its cost projections, and even to order big-ticket items at lower prices and with shorter wait periods. This added wind in the company's sails and helped them to wow the market and raise the capital needed to build their mine with no hedging and little debt.
Of course, margins will look better for all projects studied during a period when revenues are rising faster than costs, but that improvement will be an added benefit for high-margin operations, whereas it could be the difference between life and death for lower-margin operations. In other words, known large, low-grade deposits are being discounted because the grades make their economic viability questionable - and that discount will seem overdone while mining margins are unusually high. For example, a company called Geologix Explorations (T.GIX) has a large but low-grade deposit in Mexico. The project has been around for a while, not getting much love from mine analysts - myself included - because the low grade didn't seem to promise robust economics. However, the company reported a preliminary economic assessment recently, with a startling 28% internal rate of return (IRR) at $900 gold, and a terrific 49% IRR at $1,200 gold. The net present value (NPV) at a 5% discount rate came in at $258 million at $900 gold ($555 million at $1,200 gold), which was about 650% of the company's market capitalization at the time. The market responded, and the company's stock chart now looks like a hockey stick. Now, over the life of a large mine, which can be 15, 20, 30, or more years, I doubt margins like those that can be reported at this time will be maintained. Does that make IRRs and NPVs published over the next year or two lies? Not necessarily; if the economics are good enough, the project might be able to pay back the capital expenditures needed to build the mine while margins are high. After that, even a low-grade operation might be able to continue operating profitably for decades. And it must be said that whether or not a low-grade project actually becomes a mine, its owner's share price may still soar, if the company reports credible, robust, and substantial economics.
In today's frothy market, in which the obvious winners are all getting huge premiums from the market, it's hard to find anything worthwhile selling cheap. Large projects trading at discounts because of their low grades - but with chances of delivering exceptionally strong numbers while this window of opportunity is open - are just what the doctor ordered. Be careful, however, to do your due diligence, as not all low-grade projects will have what it takes, even with higher metals prices. ["Due diligence" is Louis James' middle name. For years, he has been picking the best small-cap metals juniors as the Metals Strategist at Casey Research, beating the S&P 500 by more than 8 times and physical gold by 3 times for his subscribers. For a very limited time, you can save $300 on the annual subscription fee for Casey's International Speculator - plus receive Casey's Energy Report FREE for a year! To learn more, click here now.]
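The NPV figures quoted above discount a project's yearly cash flows at a fixed rate (5% in this case). As a purely illustrative sketch with made-up numbers (not Geologix's actual cash flows):

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows; first entry is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical mine: $400M up front, then $75M per year for 15 years.
flows = [-400] + [75] * 15
print(round(npv(0.05, flows), 1))  # positive at a 5% discount rate
```

The same cash flows evaluated at a higher discount rate give a smaller NPV, which is why the discount rate is always stated alongside the figure.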
Use a C++ constructor. Execution order will follow definition order in the same translation unit. But between translation units, ordering might still not be consistent with your requirements, so you might still face the same issue in the end. Therefore, it is often better to use a static variable in a function, where the initialization happens in a user-defined constructor. (GNAT automatically finds a suitable global initialization order if such an order can be determined statically at all.)

What are the preconditions on a C++ type that allow its use in shared memory or reading from a file descriptor?

    #include <iostream>
    #include <stack>
    using namespace std;

    int main()
    {
        stack<int> mystack;
        mystack.push(10);
        mystack.push(20);
        mystack.top() -= 5;
        cout << mystack.top() << endl;
        return 0;
    }
Learning Android app development may seem like a daunting task, but it can open up a huge world of possibilities. You could create the next “hit app” that changes the way we work or interact with each other. Maybe you’ll develop a tool that you can use yourself to improve your workflow. Or perhaps you’ll just gain a new skill that lands you a great job! Also read: Making an app with no programming experience: What are your options? Whatever the case, learning Android app development might not be as tough as you think, as long as you understand what all the different moving parts are for, and have a roadmap to guide you through. This post is that road map! Step 1: Downloading the tools you need for Android app development First, you need to create your development environment so that your desktop is ready to support your Android development goals. For that, you will need Android Studio and the Android SDK. Thankfully, these both come packaged together in a single download that you can find here. Android Studio is an IDE. That stands for “integrated development environment,” which is essentially an interface where you can enter your code (primarily Java or Kotlin) and access all the different tools necessary for development. Android Studio allows you to access libraries and APIs from the Android SDK, thereby giving you access to native functions of the operating system. You’ll also be able to build your app into an APK using Gradle, test it via a “virtual device” (emulator), and debug your code while it runs. With all that said, keep in mind that there are other options available for your Android app development. For example, Unity is a very powerful tool for cross-platform game development that also supports Android. Likewise, Visual Studio with Xamarin is an excellent combination for creating cross-platform apps in C#. 
We have handy guides to getting started with each of these options: - How to create non-game apps in Unity - An introduction to Xamarin for cross platform Android development Android Studio is the best place for most people to start (with Android game development being an exception), particularly as it provides all these additional tools and resources in a single place. Fortunately, set up is very simple and you only need to follow along with the instructions on the screen. Get set up with Android Studio by following our handy guides: Step 2: Start a new project Once you have Android Studio on your machine, the next step is to start a new project. This is a straightforward process, but you’ll need to make a few decisions that will impact on your Android app development going forward. Go to File > New > New Project. You will now be asked to select a “Project Template.” This defines the code and UI elements that will be included in your new app when it loads. The word “Activity” refers to a “screen” in your app. Thus, a project with “No Activity” will be completely empty, apart from the basic file structure. A “Basic Activity” on the other hand will create a starting screen for your app and will add a button in the bottom and a hamburger menu at the top. These are common elements in many Android apps, so this can save you some time. That said, it can also risk making things more complicated when you’re first getting to grips with development. For that reason, we’re going to choose the “Empty Activity.” This will create an activity and some files for us, but it won’t add a lot of additional code. Choose a name and “package name” for your new app. The name is what your audience will see when the app is installed on their device. The package name is an internal reference used by Android to differentiate it from other apps. This should be composed using your top level domain (e.g. .com), domain name, and app name. For example: com.androidauthority.sampleapp. 
If you don’t have a domain or a company, just use “com” followed by something that appeals to you! You’ll also need to decide where you want the files to be saved and what language you’re going to code in: Java or Kotlin. Java vs Kotlin for Android app development One of the biggest decisions you’ll need to make as an Android developer is whether you’re going to learn Kotlin or Java. Both languages are officially supported by Google and Android Studio, but they have some distinct differences. Java has been supported by Google the longest and is what developers have been using to craft Android apps for years. Java is also one of the most in-demand programming languages in the world, which makes it a great choice for those who want to begin a career in development. As the oldest Android programming language, there is also slightly more support for Java vs Kotlin, although it’s not by much. Kotlin on the other hand has become Google’s preferred choice for Android development. This is the default when starting a new app, and it is likely to become more common going forward. Kotlin is also significantly easier to get to grips with if you’re a complete beginner. For these reasons, Kotlin is probably the language of choice for Android developers that are learning for fun, or that have no aspirations to develop for other platforms. However, Java makes more sense if you’re interested in becoming a professional developer. You can learn more about the two options here: Minimum SDK Finally, you also need to consider your Minimum SDK. This is the lowest version of Android that you want your app to support. The lower you make this number, the broader your potential audience will be. Keep in mind that there is a relatively low adoption rate for the latest versions of Android, so sticking with the latest update will prevent a lot of users from trying your creation. If we leave the version as the default (Android 10), then we only support 8.2% of devices! Google: do better. 
However, you will only be able to access the latest features of Android if you target a more recent version. If you like the sound of supporting chat bubbles, then you’ll want to stick with the most recent version. Step 3: Familiarize yourself with the files I remember the first time I tried Android app development. I loaded up Android Studio and was immediately baffled by what I saw. There are just so many different files, multiple types of code, folders, and more! This was worlds away from the single blank file I was used to working with in Python or even QBasic (anyone remember QBasic??). This can be rather daunting, but here’s what you need to know. The file that is open is MainActivity.java or MainActivity.kt. This is the main logic file for the activity that is going to define how your app behaves. Look on the left, and you’ll see that this file is found in: MyApplication > app > src > main > java > com > companyname > myapplication. The folders used are important for Android app development, as they help Android Studio and Gradle to find everything and build it correctly (more on Gradle in a moment). Suffice to say, you can’t just rename these as you please! You’ll notice that there is already some code on the main page. This is what we call “boilerplate code,” meaning that it is code that is almost identical across different app projects and that is needed to make basic functions work. Boilerplate code is what you’ll find yourself typing out over and over again! One of the benefits of Kotlin is that it requires less boilerplate, meaning that you’ll have less code on your screen if that is what you chose. Introducing layout files The role of this code is to tell Android where the associated layout file is. A layout file is slightly different from a Kotlin/Java file. This defines the way that an activity looks, and lets you add things like buttons, text, and browser windows. You’ll find this file in: MyApplication > app > src > res > layout. 
It will be called activity_main.xml. Note that files stored in the resources folder can't use capitals; they need to use the underscore symbol to distinguish different words. Double click on this file and it will open in the main window where you edit your code. Notice that you can switch between the open files using tabs along the top.

You can view this file via the "Code" view, the "Design" view, or a split view that shows these windows side-by-side. There are buttons to switch mode in the top right. In the design view, you can actually drag and drop different widgets onto the screen. The code view shows you a load of XML script. When you add new widgets via the Design view, this script will update. Likewise, you can tweak properties of the widgets (called "views") in here and see them reflected in real-time via the Code view.

In the vast majority of apps, you'll need to create a new Java/Kotlin file and a corresponding XML file each time you want a new activity. And for those that were wondering: yes, that means you have to learn either Kotlin or Java and XML. This is a bit of a headache, but it actually simplifies the process in the long run. For an introduction to using XML, check out this guide:

To get to grips with the different views and what they do:

The other files and folders

There are lots more files and folders here though, so what do they all do? In truth, you don't need to know what everything here is. But some things that are useful to know about:

- The Android Manifest: This is an XML file found in app > src > main (shown under "manifests" in Android Studio) that defines important features of your app. That includes the orientation of the app, the activities that you want to be included in it, the version, etc. For more, read:
- Drawable: This folder is found in res. This is where you will put things like images that you want to reference later.
- Values: This resource folder is a useful place to store values that will be used globally across your app.
For example, this can include color codes (making it easy for you to change the look of your entire app) or strings (words). You'll define these values in individual XML files, such as colors.xml.

- Gradle: Gradle is the tool that takes all your files and bundles them into a workable APK for testing. It is also useful for generating previews etc. You won't need to worry about the files in here often, but if you want to add a "dependency," this is where you will do it. Dependencies are external libraries that let you access additional functionality from within your own code. Learn more about Gradle and how it works here:

Step 4: Test your app

The first thing that you are supposed to do when familiarizing yourself with any new programming language is to create an app that says "Hello World." Thankfully, this is very easy in this case, seeing as that's what the code that's already here does! If you look at the XML, it includes a small label that just says: Hello World!

If you look at the controls along the top, you'll see there's a little green play arrow. On the left of this is a drop-down menu with a phone name in it. When you installed Android Studio, this should also have installed an Android system image along with the Virtual Device Manager. In other words, you should already have an Android emulator set up and ready to go! By clicking on this green arrow, you'll be able to launch that and test your app! Notice that this will also let you use the emulated phone as though it were a real device.

You can change the settings for your virtual device – such as screen size, Android version, space, etc. – by going to Tools > AVD Manager. You can also download new system images here. Make sure that your virtual device meets or exceeds the minimum SDK you set at the start.

Alternatively, you can try plugging a physical device into your computer and using this to test your new app. You'll need to turn on Developer Options though, and enable USB Debugging.
Step 5: Make a thing!

The best way to learn Android app development is by doing! That means you should have a stab at editing the code in front of you, to see if you can make it do something new.

First, you'll need to place these attributes inside the TextView tag in your activity_main.xml:

android:id="@+id/helloButton"
android:onClick="onHelloButtonClick"

This will give the text label the name "helloButton" and will state that the method "onHelloButtonClick" will reference this view. We're going to add that to our code in a moment.

Now you can add the following code to your MainActivity. If you see any text appear red as you are typing it, that means you need to "import" that code from the Android SDK. Click on the red text, then press Alt + Enter, and Android Studio will do this for you automatically. In short, this tells Android that you are referencing a library that is a part of the Android SDK. (The following example is written in Java.)

public class MainActivity extends AppCompatActivity {

    TextView helloButton;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        helloButton = (TextView) findViewById(R.id.helloButton);
    }

    public void onHelloButtonClick(View v) {
        helloButton.setText("Howdy World!");
    }
}

In this code example, we first create an "object" in the code called "helloButton." Inside our "onCreate" method (everything within the curly brackets) we then tell Android that this object represents the button in our layout file. The code you place here is what will run first when you launch an app.

Next, we create the method that runs when someone clicks on the button. Once that happens, we can then change the text on said button. Notice that once again, Kotlin requires significantly fewer lines to achieve the same thing!

Run the app and you should now see that when you click the button, the text changes!
This is a very simple app, but it demonstrates the basics of how Android app development works. Generally, you will be creating new on-screen elements in the layout file, then defining how they behave in the associated Java or Kotlin file.

As you get more advanced, you'll need to start manipulating and storing data. To do this, you'll use variables which contain numbers and strings (words). We have Java tutorials that will help you get started:

Once you've read through that, you'll have a basic idea of how Java works, and all that is left is to learn how you can apply these skills to Android app development. To that end, a great strategy is to pick a project and then work on that.

And wouldn't you know it: we have a ton of great projects to try out! Here are just a few:

- An easy first project for Android app development noobs: math game
- Let's make a simple Star Wars quiz!
- How to write your first Android game in Java

The key is not to try and learn "all of Android app development" but to set your sights on a realistic first project. You'll keep learning as you add new features and want to do new things, and having a goal will keep your learning fun and structured. Before you know it, you'll be a pro!
https://www.androidauthority.com/android-app-development-1128595/
Prevent review requests created with history from adding regular diffs

Review Request #9852 — Created April 3, 2018 and submitted

We now have a notion of a review request that is created with history support. A review request created with this flag cannot have a regular diff attached to it. Instead, an empty DiffSet must be created and commits attached to that. (Support for which is coming in another patch.)

This option is now specified at review request creation time and cannot be changed once the review request is created (i.e., after save() is called and the primary key has been assigned). It can be specified via a new field on the review request resources, which defaults to the old (i.e., no history) behaviour.

Ran unit tests.

Needs to be the full path.
Missing ugettext().
Is this going to be seen by end users ever? If so, we shouldn't be talking DiffSets and DiffCommits, and need to rethink what we're telling users.
Needs to be the full module path.
This will be just a tad more clear with "... cannot both be set to True".
Missing ending paren.
This would probably be better as: return (self.extra_data and self.extra_data.get(..., False))
Can you also have this check the resulting errors?
This should check the message.
Can you group those create_flags together?
Spaces around *.
Should this be two tests?
This should check the error.
Here, too.
Should have ~ at the beginning.
These are annoyingly long, and means this must be kept up-to-date. Maybe just have this refer to the list of status on the class (with a reference to it). (Also, these would be :py:attr, not :py:data).
We can probably just talk about publishing rather than have a long line for the function reference.
Needs to be the full reference.
Can you have this say explicitly that "This field cannot be set if create_with_history is set"? And similar here.
Can you put the create_arguments together?
Two blank lines.
This can fit on one line.
This doesn't need to be indented.
item_rsp['extra_data'] can fit right after the assertEqual(.

Change Summary: Addressed feedback.

Checks run (1 failed, 1 succeeded): flake8

Change Summary: Hide DVCS fields when the feature is disabled. Do not allow creation of DVCS review requests when the feature is disabled.
https://reviews.reviewboard.org/r/9852/
Okay so the deal is, our instructor wrote what the class is supposed to look like (will be included with the code) and also gave us the Input, Preconditions, Process, Output, and Postconditions of each function that is supposed to be used. I'm having trouble with the insert, delete, and clear functions dealing with the array. There is no USER input, we're just supposed to test the functions within the main function. Here's the code I've written thus far. Any help would be much appreciated!! Thanks for your time.

Code:
#include <iostream>
using namespace std;

#define DataType int
#define Boolean bool
#define ARRAYSIZE 0

class SeqList
{
private:
    int size;
    int listitem[ARRAYSIZE];
public:
    SeqList(void);
    int ListSize(void) const;
    Boolean ListEmpty(void) const;
    Boolean Find(DataType& item) const;
    DataType GetData(int pos) const;
    void Insert(const DataType& item);
    void Delete(const DataType& item);
    DataType DeleteFront(void);
    void ClearList(void);
};

SeqList::SeqList()
{
    size = 0;
}

int SeqList::ListSize() const
{
    return size;
}

Boolean SeqList::ListEmpty() const
{
    if (size == 0)
        return true;
    else
        return false;
}

Boolean SeqList::Find(DataType& item) const
{
    if (item == listitem[size])
    {
        return true;
    }
    else
        return false;
}

void SeqList::Insert(const int& item)
// Input: Item to insert in the list. No preconditions.
// Process: Add the item at the rear of the list. No output.
// Postconditions: The list has a new item; its size is increased by one.
{
    size++;
    listitem[size] = item;
}

DataType SeqList::GetData(int pos) const
{
    if (pos >= size)
        abort();
    else
        return pos;
}

void SeqList::Delete(const int& item)
// Input: Value to delete from the list. No preconditions.
// Process: Scan the list and remove the first occurrence of the item in the list.
// Take no action if the item is not in the list. No output.
// Postconditions: If a match occurs, the list has one fewer items.
{
    if (item != listitem[size])
    {
        abort();
    }
    else
        for (int i = 0; i < size; i++)
        {
            listitem[size] = listitem[size - 1];
        }
}

// DataType DeleteFront(void)
// No input. Preconditions: List must not be empty.
// Process: Remove the first item from the list. Output: Return the value of the item that is removed.
// Postconditions: The list has one fewer items.
// {
//     **Code for delete only with the index listitem[0]
// }

void ClearList(void)
// No input, no preconditions. Process: Remove all elements from the list and set the list size to 0.
{
}

int main()
{
    SeqList A;
    A.Insert(12);
    cout << A.ListSize() << endl;
    if (A.ListEmpty())
        cout << "The List Is Empty" << endl;
    else
        cout << "The List Is Not Empty" << endl;
    return 0;
}

Each time I try to write the delete function, I get a runtime error. Stack around A is corrupt. Not sure what to do.

iTsweetie
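Language aside, the behavior the spec asks for (insert at the rear, delete the first occurrence, clear) can be sketched quickly. Below is a hypothetical Python illustration of that expected behavior, not the assignment's C++ (note that the C++ version also needs ARRAYSIZE to be a positive capacity; writing into a zero-length array corrupts the stack immediately, which is one source of "stack corrupt" errors):

```python
class SeqListSketch:
    """Toy sequential list mirroring the assignment's spec (illustrative only)."""

    def __init__(self, capacity=100):
        self.capacity = capacity   # the C++ array needs a real positive size
        self.items = []

    def list_size(self):
        return len(self.items)

    def list_empty(self):
        return len(self.items) == 0

    def insert(self, item):
        # Process: add the item at the rear of the list
        if len(self.items) < self.capacity:
            self.items.append(item)

    def delete(self, item):
        # Process: remove the first occurrence; take no action if absent
        if item in self.items:
            self.items.remove(item)

    def delete_front(self):
        # Precondition: list must not be empty
        return self.items.pop(0)

    def clear_list(self):
        self.items = []
```

The class and method names here are made up for the sketch; the point is that delete only removes the first match and silently does nothing otherwise, per the stated postconditions.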
https://www.daniweb.com/programming/software-development/threads/186053/class-troubles
#include "LPC177x_8x.H"

/* U8 assumed to be defined elsewhere, e.g. typedef unsigned char U8; */

void f1(U8 *ptr);
void f2(U8 *ptr);

void (*fpTest1[2])(U8 *ptr1) = {&f1, &f2};
void (*const fpTest[2])(U8 *ptr1) = {&f1, &f2};

U8 b[2][2] = {{0, 1}, {2, 3}};

int main(void)
{
    (*fpTest[0])(b[0]);
    // fpTest[0] = &f2;
    // (*fpTest[0])(b[0]);
    (*fpTest[1])(b[1]);
}

void f1(U8 *ptr)
{
    volatile U8 a = 0;
    volatile U8 b = 0;
    a = *(ptr);
    b = *(ptr + 1);
}

void f2(U8 *ptr)
{
    volatile U8 a = 0;
    volatile U8 b = 0;
    a = ptr[0];
    b = ptr[1];
}

Why does fpTest exist in On-Chip Flash and fpTest1 in SRAM?

An initialized variable needs to have its initial value stored in flash. And on boot, the startup code needs to copy this initial value into the RAM variable. So both flash space and RAM space get consumed. If the variable is "const", then the compiler knows that the program isn't allowed to change the initial value. So there is no need to copy any data into RAM - the program can just as well have the "variable" stored directly in flash. So for a 4-byte pointer, you save 4 bytes of RAM.

So this is a SRAM optimization trick. Since the LPC1778 has a relatively large flash compared to its SRAM, can all consts be placed in the flash by the compiler, saving SRAM for the stack and for static and global variables?

It isn't a trick. And it's irrelevant if the processor has more or less flash compared to RAM. Initialized RAM must get the initial values from somewhere - and that somewhere would be the flash in a LPC17xx chip (unless you downloaded the program directly into RAM using a JTAG interface). So if the pointer value is already stored in flash - what reason would there be to make a second copy of the pointer in RAM? Copying the pointer would consume RAM, and it would slow down the boot process since the startup code has four more bytes to copy. The only cost of having const data in flash is that there might be some extra wait states for the memory access - RAM is normally faster than flash. In the LPC17xx, the program is run from flash. So keeping const data in the flash is a logical thing to do.
On a PC, the program is run in RAM. So there is no flash to store the pointer in - a running program can't use a pointer stored on the disk. So that is a reason why a PC compiler doesn't do this. But a PC compiler will still separate const variables from non-const variables and keep them in a separate memory segment - const data can be thrown away and reloaded from disk if you run out of RAM. Non-const data needs to be written to a swap file and then later restored from the swap file. So in the end, both a PC compiler and the Keil compiler will perform the same type of optimization - it's just that the optimization is adapted to the different requirements of the target hardware. So no tricks involved.

Another advantage of keeping const data in write-protected memory is that a bug in the program can't corrupt the data. And an attempt to cast away the "const" and assign to the variable will be caught in the form of a memory access error - it is considered a direct error for a program to try to write to a const variable. So a program trying that stunt should be shot on sight.

Thanks. So even if the const pointer tries to point to an invalid address (the address of a function it was not assigned to point to) by accident while executing the code from flash, will it cause an exception?

Different processors have different rules for which memory regions may produce access violations. It's the memory controller "glue" that decides if a memory region has write protection, or both read and write protection, or if you are free to both read and write. So the memory controller can allow the program to make read and write accesses even to ranges where there is no memory - it just depends on which specific processor is used. And in some cases on what register initializations your program has made.

But besides violations from accessing invalid memory ranges, you can also get exceptions by trying to execute invalid instructions, which may happen if a pointer points into "air".
Or if you get a stack overflow, or the program has problems with memory overwrites. Or maybe there are problems with the memory so you get bit errors.

Note that you do not need to perform any assignment to the pointer - the compiler/linker together with the startup code will assign initial values to the pointers. And as long as you don't upload a broken binary to the processor, there will be two functions at the destination addresses for the two pointers.

View all questions in Keil forum
https://community.arm.com/developer/tools-software/tools/f/keil-forum/41197/constant-function-pointer-in-on-chip-flash
Something changed in Grails 3 and how datasources are configured. If you have datasources defined like this:

dataSources:
    dataSource:
        pooled: true
        jmxExport: true
        driverClassName: org.h2.Driver
        username: sa
        password:
        dbCreate: create-drop
        url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    secondary:
        pooled: true
        jmxExport: true
        driverClassName: org.h2.Driver
        username: sa
        password:
        dbCreate: create-drop
        url: jdbc:h2:mem:devDb2;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE

You can use the "other datasource" (secondary) in a Grails domain class very simply:

class Book {
    String title

    static constraints = {
    }

    static mapping = {
        datasource 'secondary'
    }
}

And you can use the dataSource object by name in a Grails service like this:

class MyService {
    def dataSource
    .....
}

So you would think that the following would work:

class MyService {
    def secondary
    .....
}

but it doesn't… Grails doesn't know how to wire that secondary datasource, so it's null. Instead, you have to put it in the resources.groovy or use Spring's @Autowired on it like so:

class BookSqlSecondaryService {
    @Autowired
    @Qualifier('dataSource_secondary')
    def secondary
}

The documentation will be updated in Grails 3.2, but it's still the case in Grails 3.0.x and Grails 3.1.x.

One thought on "Using secondary datasources in Grails 3"

I am unable to use Qualifier in my Grails 3.1.4 project. Any suggestions?

startup failed: C:\gcprime_local\gcprime-3\gcprime\grails-app\services\adminrequest\BasicReportsService.groovy: 11: unable to resolve class Qualifier, unable to find class for annotation @ line 11, column 1. @Qualifier('dataSource_workflow') ^

I have added the following imports and that did not help:
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.beans.factory.annotation.Qualifier

Dave — I hadn't used this in Grails 3.1 before (I was in the 3.0 world)… AL (below) said it worked in Grails 3.1.8… so maybe upgrade your Grails version?
Thank you very much Mike! After a couple hours trying to figure out how to get this done in Grails 3, your advice did the trick. Note that I’m using Grails 3.1.8, so I didn’t run into the issue that Wayne did in the previous post.
https://objectpartners.com/2016/03/09/using-secondary-datasources-in-grails-3/
Resolve Conflicts in Refactorings

All ReSharper's refactorings are applied solution-wide, therefore many files can be affected and some changes may conflict with the existing code. If there are any conflicts, ReSharper detects them and displays the list of conflicts on the last page of the refactoring wizard. For example, here are some conflicts that appeared when applying the Safe Delete refactoring to a method:

There are two types of conflicts:

- Usages that can be deleted without breaking the compilation are marked with the icon. For example, this can be a call to a method that you delete or an assignment of a field that you delete.
- Usages that cannot be deleted without breaking the compilation are marked with the icon. For example, you rename a class and another class with the same name already exists in the same namespace.

You can do one of the following:

- Use the clickable links to navigate to the source code of the conflicts and resolve them manually. After resolving conflicts, click Refresh. If there are no conflicts with the 'error' severity, you can click Next and be sure that your solution will compile after the refactoring.
- Click Next to finish the refactoring even if error-severity conflicts are reported. In this case, you will have to correct the resulting errors after the refactoring.
- Click Cancel to stop the refactoring operation; no changes will be made.
https://www.jetbrains.com/help/resharper/Refactorings__Resolving_Conflicts_in_Refactorings.html
A Simple ROBOTS.TXT Handler in EPiServer

We're moving a client from a Dreamweaver-based site to an EPiServer site. One of the only reasons they had left to FTP into the site at any time was to manage the robots.txt file. We wanted to eliminate that reason so we could shut FTP off completely. This is what we did:

1. Added a Long String property on the Start Page called "RobotsTxt". We dumped the contents of the existing robots.txt in there.

2. Added this line to our web.config:

<add name="RobotsTxtHandler" preCondition="integratedMode" verb="*" path="robots.txt" type="Blend.EPiServer.RobotsTxtHandler, [Assembly Name]" />

3. Compiled this class into our project:

using System.Web;
using EPiServer;
using EPiServer.Core;

namespace Blend.EPiServer
{
    public class RobotsTxtHandler : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write(DataFactory.Instance.GetPage(PageReference.StartPage).Property["RobotsTxt"].ToString());
        }
    }
}

Now, the client edits their robots.txt text in EPiServer. Calling "/robots.txt" will dump the contents of that Start Page property into the output as plain text.

(The larger principle at work here is that the Start Page is generally treated as a Singleton in EPiServer – there will only ever be one per site. This means we tend to load it up with global properties – data you just need to store and manage in EPiServer, but will never publish as its own page. Things like this robots.txt text, the TITLE tag suffix for the site, random Master Page text strings, etc.)
https://world.episerver.com/blogs/Deane-Barker/Dates/2010/9/A-Simple-ROBOTSTXT-Handler-in-EPiServer/
d = {'Mark': 100.0, 'Peter': 50.0, 'John': 25.0}
for i, j in d.items():
    print(i + ' pays ' + str(j))
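The same key/value iteration can be written a little more cleanly (assuming Python 3.7+, where dicts keep insertion order) with f-strings instead of string concatenation:

```python
# Same dict iteration as above, using f-strings instead of concatenation.
d = {'Mark': 100.0, 'Peter': 50.0, 'John': 25.0}

lines = [f"{name} pays {amount}" for name, amount in d.items()]
print("\n".join(lines))
# prints:
# Mark pays 100.0
# Peter pays 50.0
# John pays 25.0
```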
https://www.codegrepper.com/code-examples/whatever/d+%3D+%7B%27Mark%27%3A+100.0%2C+%27Peter%27%3A+50.0%2C+%27John%27%3A+25.0%7Dfor+i%2C+j+in+d.items%28%29%3A+print+%28i+%2B+%27+pays+%27+%2B+str%28j%29%29
Schema Registry

Type safety is extremely important in any application built around a message bus like Pulsar. Producers and consumers need some kind of mechanism for coordinating types at the topic level lest a wide variety of potential problems arise (for example serialization and deserialization issues). Applications typically adopt one of two basic approaches to type safety in messaging:

- A "client-side" approach in which producers and consumers serialize and deserialize messages themselves and agree out-of-band on the types (say, moisture sensor readings) being transmitted via a topic.
- A "server-side" approach in which producers and consumers inform the system which data types can be transmitted via the topic. With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.

Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.

- For the "client-side" approach, producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
- For the "server-side" approach, Pulsar has a built-in schema registry that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.

Note: Currently, the Pulsar schema registry is only available for the Java client, Python client, and C++ client.

Basic architecture

Schemas are automatically uploaded when you create a typed Producer with a Schema. Additionally, schemas can be manually uploaded to, fetched from, and updated via Pulsar's REST API.

Other schema registry backends

Out of the box, Pulsar uses the Apache BookKeeper log storage system for schema storage. You can, however, use different backends if you wish. Documentation for custom schema storage logic is coming soon.

How schemas work

Pulsar schemas are applied and enforced at the topic level (schemas cannot be applied at the namespace or tenant level). Producers and consumers upload schemas to Pulsar brokers.
Pulsar schemas are fairly simple data structures that consist of:

- A name. In Pulsar, a schema's name is the topic to which the schema is applied.
- A payload, which is a binary representation of the schema.
- A schema type.
- User-defined properties as a string/string map. Usage of properties is wholly application-specific. Possible properties might be the Git hash associated with a schema, an environment like dev or prod, etc.

Schema versions

In order to illustrate how schema versioning works, let's walk through an example. Imagine a Pulsar Java client that attempts to connect to Pulsar and begin sending messages. Schemas are versioned in succession. Schema storage happens in the broker that handles the associated topic so that version assignments can be made. Once a version is assigned/fetched to/for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.

Supported schema formats

The following formats are supported by the Pulsar schema registry:

- None. If no schema is specified for a topic, producers and consumers will handle raw bytes.
- String (used for UTF-8-encoded strings)
- JSON
- Protobuf
- Avro

For usage instructions, see the documentation for your preferred client library. Support for other schema formats will be added in future releases of Pulsar.

Managing Schemas

You can use Pulsar admin tools to manage schemas for topics.
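The four schema fields listed above can be modeled as a plain data structure. This is purely an illustration of the concept, not the actual Pulsar client API; the topic name and property values below are made up:

```python
from dataclasses import dataclass, field

@dataclass
class SchemaInfo:
    """Illustrative model of a Pulsar schema: name, payload, type, properties."""
    name: str          # the topic the schema applies to
    payload: bytes     # binary representation of the schema itself
    schema_type: str   # e.g. "NONE", "STRING", "JSON", "PROTOBUF", "AVRO"
    properties: dict = field(default_factory=dict)  # user-defined string/string map

# A hypothetical schema for a sensor-readings topic
reading_schema = SchemaInfo(
    name="persistent://public/default/sensor-readings",
    payload=b'{"type": "record", "name": "Reading"}',
    schema_type="JSON",
    properties={"env": "dev", "git-hash": "abc123"},
)
```

In the real system, a structure like this is what the broker stores per topic and versions in succession; the properties map is where application-specific metadata such as an environment tag would live.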
https://pulsar.apache.org/docs/2.3.1/concepts-schema-registry/
Computer Science Archive: Questions from December 13, 2010

- Anonymous asked:

Problem 2. The purpose of this problem is to show the effect of information propagation within a calculation. Assume that you have a square array of points, 16 x 16, and that the value of the potential function on the boundary is zero on the top row, 20 on the bottom row, and 10 along all other boundary points. Write a computer program in C that iterates until no value changes by more than 0.1 percent. Let this be the initial state of the problem for parts a, b, and c below.

a. Increase the boundary point at the upper left corner to a new value of 20.
b. Now restore the mesh to the initial state for problem a. Change the problem so that, in effect, the upper left corner is rotated to the bottom right corner. To do this, scan the rows from right to left instead of left to right and scan from bottom to top instead of top to bottom. Perform five iterations of the Poisson solver and observe the values obtained.
c. Both problems a and b eventually converge to the same solution because the initial data are the same and the physical process modeled is the same. However, the results obtained from problems a and b are different after five iterations of the Poisson solver. Explain why they are different. Which of the two problems has faster convergence? Why?

(0 answers)

- Anonymous asked:

There are 128 fluid ounces per gallon. One gallon is equivalent to about 3.79 liters. Write a progra…

(1 answer)

- Anonymous asked:

Write a program that will receive the day of the year as an integer (1 to 365) from the command line…

(1 answer)

- Anonymous asked:

In mathematics, the Fibonacci numbers are the numbers in the following integer sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
By definition, the first two Fibonacci numbers are 0 and 1, and each subsequent number is the sum of the previous two. Write a program that will receive an integer N as a command line argument. Your program should output the numbers in the Fibonacci sequence that are less than or equal to N. It should then output on the next line how many Fibonacci numbers were printed before the program ends. For example, for N=12, your program should print the sequence 0, 1, 1, 2, 3, 5, 8 and then print on another line the number 7, meaning there are 7 numbers in the Fibonacci sequence less than or equal to 12.

(1 answer)

- Anonymous asked:

In geometry, Heron's (Hero's) formula, named after Heron of Alexandria, states that the area A of a triangle whose sides have lengths a, b, and c is A = sqrt(s(s - a)(s - b)(s - c)), where s is the semi-perimeter of the triangle: s = (a + b + c)/2. Note that, as one of the properties of any triangle, the sum of the lengths of any two sides of a triangle must be greater than the third side. Write a program that will receive interactively the values for a, b, and c. Your program should determine whether a, b, and c can be the lengths of the sides of a triangle given the above-mentioned property. If they can, determine the area of the triangle formed using Heron's formula; if not, output a message stating that a, b, and c do not form a triangle.

(1 answer)

- Anonymous asked:

Write a Sub named FunctionDemo and a Function named triRV which do the following:

FunctionDemo
1. Gets two integer numbers from the user, a and b where b > a, and displays them in the 'yellow' area in Column I. (No error checking on the user input is needed - we'll assume the user enters values appropriately.)
2. Sets up a loop to generate and display 10 random variables from the triRV function in the "blue" area. (e.g. uses the function triRV(a,b) 10 times.)
3.
Attach your FunctionDemo macro to a button on this sheet. triRV(a,b) Function triRV generates random variables from a triangular distribution. VBA uses sqr() as its square root function and rnd() as its uniform distribution random number generator. ( rnd() provides equally likely numbers between 0 and 1 - e.g. values like .31879284 or .86631098) The triRV function's definition is as follows: U = rnd() if U <= .5 then triRV = a + (b-a)* sqr(U/2) else triRV = a + (b-a)*(1 - sqr((1-U)/2) ) end if • Show less0 answers - Anonymous askedIn protocol rdt3.0, the ACK packets flowing from the receiver to the sender do not have sequence num... Show more In protocol rdt3.0, the ACK packets flowing from the receiver to the sender do not have sequence numbers (although they do have an ACK field that contains the sequence number of the packet they are acknowledging). Why is it that our ACK packets do not require sequence numbers?• Show less0 answers - Anonymous askedThe object of this problem is to explore numerical calculations for the continuum computat... Show moreProblem 1 The object of this problem is to explore numerical calculations for the continuum computation model on mesh or grid type multiple processors. You should model your problem in C. Assume that you have a square array of points, 16 X 16, and that the value of the potential function on the boundary is zero on the top row, 20 on the bottom row and 10 along all other boundary points. 1.1 Initialize the potential function to 0 on all interior points.. 1.2 Repeat the process in the previous problem part 1.1, except you update a point as soon as you have computed the new value and use the new value when you reach a neighboring point. You should scan the interior points row by row from top to bottom and from left to right within rows. 1.3 The second process seems to converge faster. Give a intuitive explanation of why this might be the case. 
1.4 How do your findings relate to the interconnection structure of a processor for solving this problem? • Show less0 answers - Anonymous askedA multiprocessor node must sometimes send a message to more than one other processor, a ta... Show moreProblem 1 A multiprocessor node must sometimes send a message to more than one other processor, a task referred to as “broadcasting”. Suppose that a node P0 in an n-dimensional hypercube system has to broadcast a message to all 2n – 1 other processors. The broadcasting is subject to the constraints that the message can be forwarded (retransmitted) by a node only to a neighboring node, and each node can transmit only one message at a time. Assume that each message transmission between adjacent nodes requires only one time unit. I a two-dimensional system, for example, P0 could broadcast a message MESS as follows. At time t = 0, P0 sends MESS to P1. At t = 1, P0 sends MESS to P2 and P1 sends MESS to P3, thus completing the broadcast in 2 time units. 1.1 Construct a general broadcasting algorithm for the n-dimensional case that allows a message to reach all nodes in n time units. Specify clearly the algorithm used by each node to determine the neighboring nodes to which it should forward an incoming message. 1.2 Apply your algorithm on the designing the 8 x 8 heat-flow program (Laplace equation) to run on a 64-processor hypercube computer that stimulates a mesh computer. Specify completely the function you use to map the problem’s gridpoints onto the nodes of the hypercube. 1.3 Stimulate the 16 x 16 heat-flow problem, i.e. build a C program to stimulate the operation of a net of 2 x 2 processor nodes, assuming reasonable communication cost. Tabulate your results. Comment how would the problem scale to a 256 x 256 heat-flow problem? • Show less0 answers - Anonymous askedSuppose that a counter begins at a number with b 1's in its binary representation , rather than ... 
[1 answer]
- Anonymous asked: I have to write a program that calculates the discharge of an open flow channel, asking the user to input the points and their coordinates. The program will then calculate the distances between points, the cross-sectional area, the perimeter, and then the discharge. Help needed urgently :] thanks [0 answers]
- Anonymous asked: Write a program that outputs an appropriate message for grouping symbols, such as parentheses and br... [1 answer]
- AnimatedLeopard6328 asked: Java help!!! I need all of them done, not just one!!! Question details: Use the Record.java, CreateRandomFile.java, WriteRandomFile.java, and ReadRandomFile2.java below and add the following 2 fields: double gpa; char sex;

  Record.java:

      // Record class for the RandomAccessFile programs.
      import java.io.*;

      public class Record {
          private int account;
          private String lastName;
          private String firstName;
          private double balance;
          private String socsec;

          // Read a record from the specified RandomAccessFile
          public void read( RandomAccessFile file ) throws IOException {
              account = file.readInt();
              char first[] = new char[ 15 ];
              for ( int i = 0; i < first.length; i++ )
                  first[ i ] = file.readChar();
              firstName = new String( first );
              char last[] = new char[ 15 ];
              for ( int i = 0; i < last.length; i++ )
                  last[ i ] = file.readChar();
              lastName = new String( last );
              balance = file.readDouble();
              char ssn[] = new char[ 15 ];
              for ( int i = 0; i < ssn.length; i++ )
                  ssn[ i ] = file.readChar();
              socsec = new String( ssn );
          }

          // Write a record to the specified RandomAccessFile
          public void write( RandomAccessFile file ) throws IOException {
              StringBuffer buf;
              file.writeInt( account );
              if ( firstName != null )
                  buf = new StringBuffer( firstName );
              else
                  buf = new StringBuffer( 15 );
              buf.setLength( 15 );
              file.writeChars( buf.toString() );
              if ( lastName != null )
                  buf = new StringBuffer( lastName );
              else
                  buf = new StringBuffer( 15 );
              buf.setLength( 15 );
              file.writeChars( buf.toString() );
              file.writeDouble( balance );
              if ( socsec != null )
                  buf = new StringBuffer( socsec );
              else
                  buf = new StringBuffer( 15 );
              buf.setLength( 15 );
              file.writeChars( buf.toString() );
          }

          public void setAccount( int account ) { this.account = account; }
          public int getAccount() { return account; }
          public void setFirstName( String firstName ) { this.firstName = firstName; }
          public String getFirstName() { return firstName; }
          public void setLastName( String l ) { lastName = l; }
          public String getLastName() { return lastName; }
          public void setBalance( double b ) { balance = b; }
          public double getBalance() { return balance; }
          public void setSocsec( String socsec ) { this.socsec = socsec; }
          public String getSocsec() { return socsec; }

          // NOTE: This method contains a hard-coded value for the
          // size of a record of information.
          public static int size() { return 102; }
      }

  CreateRandomFile.java:

      // This program creates a random access file sequentially
      // by writing 100 empty records to disk.
      import java.io.*;

      public class CreateRandomFile {
          private Record blank;
          private RandomAccessFile file;

          public CreateRandomFile() {
              blank = new Record();
              try {
                  file = new RandomAccessFile( "credit.dat", "rw" );
                  for ( int i = 0; i < 100; i++ )
                      blank.write( file );
              }
              catch ( IOException e ) {
                  System.err.println( "File not opened properly\n" + e.toString() );
                  System.exit( 1 );
              }
          }

          public static void main( String args[] ) {
              CreateRandomFile accounts = new CreateRandomFile();
          }
      }

  WriteRandomFile.java:

      // This program uses TextFields to get information from the
      // user at the keyboard and writes the information to a
      // random access file.
      import java.io.*;
      import java.awt.*;
      import java.awt.event.*;

      public class WriteRandomFile extends Frame implements ActionListener {
          // TextFields where user enters account number, first name,
          // last name and balance.
          private TextField accountField, firstNameField, lastNameField,
                            balanceField, socsecField;
          private Button enter,  // send record to file
                         done;   // quit program
          // Application other pieces
          private RandomAccessFile output;  // file for output
          private Record data;

          // Constructor -- initialize the Frame
          public WriteRandomFile() {
              super( "Write to random access file" );
              data = new Record();
              // Open the file
              try {
                  output = new RandomAccessFile( "credit.dat", "rw" );
              }
              catch ( IOException e ) {
                  System.err.println( e.toString() );
                  System.exit( 1 );
              }
              setSize( 300, 150 );
              setLayout( new GridLayout( 6, 2 ) );
              setFont( new Font( "SansSerif", Font.BOLD, 10 ) );
              // create the components of the Frame
              add( new Label( "Account Number" ) );
              accountField = new TextField();
              add( accountField );
              add( new Label( "First Name" ) );
              firstNameField = new TextField( 20 );
              add( firstNameField );
              add( new Label( "Last Name" ) );
              lastNameField = new TextField( 20 );
              add( lastNameField );
              add( new Label( "Balance" ) );
              balanceField = new TextField( 20 );
              add( balanceField );
              add( new Label( "Socsec" ) );
              socsecField = new TextField( 20 );
              add( socsecField );
              enter = new Button( "Enter" );
              enter.addActionListener( this );
              add( enter );
              done = new Button( "Done" );
              done.addActionListener( this );
              add( done );
              setVisible( true );
          }

          public void addRecord() {
              int accountNumber = 0;
              Double d;
              if ( !accountField.getText().equals( "" ) ) {
                  // output the values to the file
                  try {
                      accountNumber = Integer.parseInt( accountField.getText() );
                      if ( accountNumber > 0 && accountNumber <= 100 ) {
                          data.setAccount( accountNumber );
                          data.setFirstName( firstNameField.getText() );
                          data.setLastName( lastNameField.getText() );
                          d = new Double( balanceField.getText() );
                          data.setBalance( d.doubleValue() );
                          data.setSocsec( socsecField.getText() );
                          output.seek( (long) ( accountNumber - 1 ) * Record.size() );
                          data.write( output );
                      }
                      // clear the TextFields
                      accountField.setText( "" );
                      firstNameField.setText( "" );
                      lastNameField.setText( "" );
                      balanceField.setText( "" );
                      socsecField.setText( "" );
                  }
                  catch ( NumberFormatException nfe ) {
                      System.err.println( "You must enter an integer account number" );
                  }
                  catch ( IOException io ) {
                      System.err.println( "Error during write to file\n" + io.toString() );
                      System.exit( 1 );
                  }
              }
          }

          public void actionPerformed( ActionEvent e ) {
              if ( e.getSource() == enter )
                  addRecord();
              if ( e.getSource() == done ) {
                  try {
                      output.close();
                  }
                  catch ( IOException io ) {
                      System.err.println( "File not closed properly\n" + io.toString() );
                      System.exit( 1 );
                  }
                  System.exit( 0 );
              }
          }

          // Instantiate a WriteRandomFile object and start the program
          public static void main( String args[] ) {
              new WriteRandomFile();
          }
      }

  ReadRandomFile2.java:

      // This program reads a random access file sequentially and
      // displays the contents one record at a time in text fields.
      import java.io.*;
      import java.awt.*;
      import java.awt.event.*;
      import java.text.DecimalFormat;

      public class ReadRandomFile2 extends Frame implements ActionListener {
          // TextFields to display account number, first name,
          // last name and balance.
          private TextField accountField, firstNameField, lastNameField,
                            balanceField, socsecField;
          private Button next,  // get next record in file
                         done;  // quit program
          // Application other pieces
          private RandomAccessFile input;
          private Record data;

          // Constructor -- initialize the Frame
          public ReadRandomFile2() {
              super( "Read Client File" );
              // Open the file
              try {
                  input = new RandomAccessFile( "credit.dat", "r" );
              }
              catch ( IOException e ) {
                  System.err.println( e.toString() );
                  System.exit( 1 );
              }
              data = new Record();
              setSize( 300, 150 );
              setLayout( new GridLayout( 6, 2 ) );
              // create the components of the Frame
              add( new Label( "Account Number" ) );
              accountField = new TextField();
              accountField.setEditable( false );
              add( accountField );
              add( new Label( "First Name" ) );
              firstNameField = new TextField( 20 );
              firstNameField.setEditable( false );
              add( firstNameField );
              add( new Label( "Last Name" ) );
              lastNameField = new TextField( 20 );
              lastNameField.setEditable( false );
              add( lastNameField );
              add( new Label( "Balance" ) );
              balanceField = new TextField( 20 );
              balanceField.setEditable( false );
              add( balanceField );
              add( new Label( "Socsec" ) );
              socsecField = new TextField( 20 );
              socsecField.setEditable( false );
              add( socsecField );
              next = new Button( "Next" );
              next.addActionListener( this );
              add( next );
              done = new Button( "Done" );
              done.addActionListener( this );
              add( done );
              setVisible( true );
          }

          public void actionPerformed( ActionEvent e ) {
              if ( e.getSource() == next )
                  readRecord();
              else
                  closeFile();
          }

          public void readRecord() {
              DecimalFormat twoDigits = new DecimalFormat( "0.00" );
              // read a record and display
              try {
                  do {
                      data.read( input );
                  } while ( data.getAccount() == 0 );
                  accountField.setText( String.valueOf( data.getAccount() ) );
                  firstNameField.setText( data.getFirstName() );
                  lastNameField.setText( data.getLastName() );
                  balanceField.setText( String.valueOf( twoDigits.format( data.getBalance() ) ) );
                  socsecField.setText( data.getSocsec() );
              }
              catch ( EOFException eof ) {
                  closeFile();
              }
              catch ( IOException e ) {
                  System.err.println( "Error during read from file\n" + e.toString() );
                  System.exit( 1 );
              }
          }

          private void closeFile() {
              try {
                  input.close();
                  System.exit( 0 );
              }
              catch ( IOException e ) {
                  System.err.println( "Error closing file\n" + e.toString() );
                  System.exit( 1 );
              }
          }

          // Instantiate a ReadRandomFile object and start the program
          public static void main( String args[] ) {
              new ReadRandomFile2();
          }
      }

  [0 answers]
- UserName asked: Write a program in C++ that uses nodes and single linked lists to add the sum of the digits in an in... [1 answer]
- Anonymous asked: Consider a modification to the activity-selection problem in which each activity ai has, in addition to a start and finish time, a value vi. The objective is no longer to maximize the number of activities scheduled, but instead to maximize the total value of the activities scheduled. That is, we wish to choose a set A of compatible activities such that the total value of the activities in A is maximized. Give a polynomial-time algorithm for this problem. [0 answers]
- Anonymous asked: Modern computers use a cache to store a small amount of data in fast memory. Even though a program may access large amounts of data, by storing a small subset of the main memory in the cache -- a small but faster memory -- overall access time can greatly decrease. When a computer program executes, it makes a sequence <r1, r2, ..., rn> of n memory requests, where each request is for a particular data element. For instance, a program that accesses four distinct elements {a,b,c,d} might make the sequence of requests <d, b, d, b, d, a, c, d, b, a, c, b>. Let k be the size of the cache.
When the cache contains k elements and the program requests the (k+1)st element, the system must decide, for this and each subsequent request, which k elements to keep in the cache. More precisely, for each request ri, the cache-management algorithm checks whether the element ri is already in the cache. If it is, then we have a cache hit; otherwise we have a cache miss. Upon a cache miss, the system retrieves ri from main memory, and the cache-management system must decide whether to keep ri in the cache. If it decides to keep ri and the cache already holds k elements, then it must evict one element to make room for ri. Furthest-in-future strategy: the input should be a sequence <r1, r2, ..., rn>; the furthest-in-future strategy produces the minimum possible number of cache misses. [0 answers]
- Anonymous asked [0 answers]
- Anonymous asked: Write a program to input, for each customer, the account number and the beginning balance. The program should contain three functions: 1) Deposit -- deposit an amount of money. 2) Withdraw -- withdraw an amount of money. 3) Print -- print all information after updates. It should calculate and print the new balance according to the selected function. Your program should have 2 menus. First menu: 1- New customer 2- Exit. Sub menu: 3- Deposit an amount of money 4- Withdraw an amount of money 5- Display all information with new balance 6- Exit. In your program, the inner loop continues until the user selects Exit; then the user goes back to the first menu. [1 answer]
- Anonymous asked [3 answers]
- Anonymous asked: Write a program that uses a stack to print the prime factors of a positive integer in descending order. [1 answer]
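The stack exercise above comes down to pushing prime factors in increasing order and then popping them so they print largest first. A minimal sketch in Python (the question names no language, and the function name is mine):

```python
def prime_factors_descending(n):
    """Push the prime factors of n (smallest first) onto a stack,
    then pop them so they come off largest first."""
    stack = []
    factor = 2
    while factor * factor <= n:
        while n % factor == 0:
            stack.append(factor)    # push
            n //= factor
        factor += 1
    if n > 1:
        stack.append(n)             # leftover prime factor
    popped = []
    while stack:
        popped.append(stack.pop())  # LIFO pop reverses push order: descending
    return popped

print(prime_factors_descending(60))  # pushes 2, 2, 3, 5; pops [5, 3, 2, 2]
```

Because trial division emits factors in nondecreasing order, the stack's last-in-first-out discipline is exactly what produces the descending output, which is the point of the exercise.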
- Anonymous asked: Write a program that reads a line of text, changes each uppercase letter to lowercase, and places each letter both in a queue and onto a stack. The program should then verify whether the line of text is a palindrome. [1 answer]
- Anonymous asked: 1. Develop a mathematical model for the circuit in the form of differential equations. 2. Find the i... [1 answer]
- Anonymous asked: I have to write a C++ program that declares an array alpha of 50 components. 1. Initialize the array to zero. 2. Read the info into the array from a file array.txt. 2a. And have it output into a file outarray.txt. 3. Print the array. 4. Sum all of the components of the array. 5. Find the smallest number in the array. 6. Find the largest number in the array. 7. Find the average of the numbers. Help please... [1 answer]
- Anonymous asked [0 answers]
- Anonymous asked: To convert the data from your organization's current donation database, an output file needs to be created in your new system's format. Scenario: You have been assigned the task to convert the data from your organization's current donation database. An output file needs to be created in your new system's format. Unfortunately, the organization that developed the old system has gone out of business and you have no documentation available. Your new system does not have extensive documentation either. Input file: the input file is named expinput.txt. The only thing resembling documentation is the user input screens below; you will need to look at the file and determine how it matches the screens. Output file: the file should be named expoutput.txt. The input file has fields that are not in the output file (these should be ignored). The output file has fields that do not exist in the input file (these will require calculations). Requirements: the program must be written in Java using NetBeans; the program must produce the output file as described in this documentation; the program must use at least three classes that are written by the student and an array; the program must allow the user to verify the file by letting them search the array for specific users, displaying the information for that user.

  Old System -- Screen Prints

      Contributor: 1172864   Name: Patricia Collins   Phone: (111)555-3522
      Address: 56 Fourth Street, Riverside, IN 14089
      Donations                    Volunteering
      12/06/03  $  24.00           12/08/01  4 hours
      08/11/03  $ 223.00
      07/16/01  $ 489.00
      Total     $ 736.00           Total     4

      Contributor: 1278222   Name: Michael Kroll   Phone: (111)555-3838
      Address: 707 Seventh Street, Portland, OK 25747
      Donations                    Volunteering
      10/09/04  $  42.00           12/09/02  3 hours
      03/26/03  $ 162.00           38270     4 hours
      12/04/04  $ 395.00
      Total     $ 599.00           Total     7

      Contributor: 1334969   Name: Scott Marecek   Phone: (111)555-4008
      Address: 14 Main Street, Georgetown, WI 30092
      Donations                    Volunteering
      11/06/02  $ 247.00           07/23/00  3 hours
      12/27/01  $ 323.00           37988     1 hours
      03/20/02  $ 379.00
      Total     $ 949.00           Total     4

  New System -- File Layout

      Field 1  Contributor       7 digits       9999999
      Field 2  First Name        15 characters  XXXXXXXXXXXXXXX
      Field 3  Last Name         15 characters  XXXXXXXXXXXXXXX
      Field 4  Donations         2 digits       99
      Field 5  Total Donations   5.2 number     99999.99
      Field 6  Volunteer Hours   3 digits       999

      5 10 15 20 25 30 35 40 45 50 55 65 75 80
      ----|----|----|----|----|----|----|----|----|----|----|----|----|----|
      1172864Patricia       Collins        0300736.00004
      1278222Michael        Kroll          0300599.00007
      1334969Scott          Marecek        0300949.00004

  Expinput.txt:

      172864,Collins,Patricia ,1115553522,PaCollins@asknow.tv,56 Fourth Street ,Riverside,IN,14089,07/07/01,PC2-22,"ASC%$%FFFR",C,Y,100.00
      1172864,12/06/03,Y,24.00,C,M,50.00
      1172864,08/11/03,Y,223.00,M,P,100.00
      1172864,07/16/01,M,489.00,M,P,50.00
      1172864,12/08/01,4,M,M,M,100.00
      1278222,Kroll,Michael ,1115553838,MiKroll@thecorpt.net,707 Seventh Street ,Portland,OK,25747,01/16/03,SG6-62,")8¦?¦d ?ƒzL?",M,M,50.00
      1278222,10/09/04,Y,42.00,M,M,25.00
      1278222,03/26/03,Y,162.00,Y,M,25.00
      1278222,12/04/04,Y,395.00,M,P,50.00
      1278222,12/09/02,3,L,V,Y,20.00
      1278222,10/10/04,4,L,Y,Y,25.00
      1334969,Marecek,Scott ,1115554008,ScMarecek@yesterday.org,14 Main Street ,Georgetown,WI,30092,04/23/03,KG2-19,",-væ?w??*a~ñ9,?",M,M,50.00
      1334969,11/06/02,M,247.00,M,P,20.00
      1334969,12/27/01,Y,323.00,M,M,50.00
      1334969,03/20/02,Y,379.00,M,P,100.00
      1334969,07/23/00,3,L,C,M,25.00
      1334969,01/02/04,1,C,M,M,100.00
      1100699,Palmer,Barry ,1115553305,BaPalmer@asknow.tv,95 Fifth Street ,Springfield,OK,25747,08/15/02,NC3-2, ?xk-\?*<??8??,Y,M,25.00
      1100701,10/06/04,Y,60.00,V,M,20.00
      1100701,12/02/02,Y,287.00,Y,M,20.00
      1100701,04/07/04,Y,316.00,C,M,50.00
      1100701,05/15/01,Y,334.00,M,M,20.00
      1100701,11/12/03,Y,405.00,M,M,50.00
      1100701,04/09/00,Y,451.00,C,M,100.00
      1100701,11/26/03,5,M,M,P,25.00
      1107461,Sona,Matt ,1115553325,MaSona@yesterday.org,441 First Street ,Washington,OK,16623,09/15/04,PC1-72, 6 ??™?9XE ?G§,V,Y,20.00
      1107461,10/25/02,Y,407.00,V,P,25.00
      1107461,04/21/03,Y,461.00,M,M,25.00
      1107461,12/03/04,4,S,M,M,25.00
      1113432,LaCrosse,Frank ,1115553343,FrLaCrosse@loa.com,1595 Cedar Blvd. ,Springfield,TX,22054,08/19/03,PC1-31," X,6; ??1?¥P ??",C,M,50.00
      1113432,12/14/04,M,196.00,C,Y,100.00
      1120328,06/08/04,Y,75.00,M,Y,100.00
      1120328,06/01/02,M,130.00,M,P,20.00
      1120328,07/27/00,2,M,V,M,25.00
      1120328,07/21/01,4,S,M,M,100.00
      1128221,Hovie,Stephen ,1115553388,StHovie@yesterday.org,1595 Cedar Blvd. ,Midway,WI,18956,12/23/02,PC1-30,"D?5U 4?MZ?""8ƒ",M,M,50.00
      1128221,12/19/01,Y,240.00,V,Y,20.00
      1128221,02/06/00,5,L,M,M,20.00
      1142212,Hackel,Joseph ,1115553430,JoHackel@googol.org,2862 Lake Ave. ,Lincoln,TX,22054,01/19/02,PC2-51," ?A[)Š+,H?6",M,M,50.00
      1142212,09/10/02,M,370.00,V,M,50.00
      1144816,Myaskovsky,Jennifer ,1115553437,JeMyaskovsky@loa.com,103 View Drive ,Lincoln,WI,30092,01/23/00,PC1-59,"¨rS™?{¦ %1",V,Y,100.00
      1144816,04/19/00,Y,180.00,M,M,50.00
      1144816,12/02/02,1,L,C,M,25.00
      1146249,Wood,Jesus ,1115553442,JeWood@thecorpt.net,1092 Sixth Street ,Beaver Dam,IL,81248,12/03/04,PC1-25, !??? p[?¬;?’D4,C,M,25.00
      1146249,05/08/01,Y,488.00,M,P,50.00
      1146249,09/06/03,1,L,Y,M,25.00
      1157531,Scott,Rhonda ,1115553476,RhScott@thecorpt.net,6687 Fourth Street ,Fairview,OK,24473,02/16/03,NV1-1, !o b???\f?Ž6.,V,Y,20.00
      1157531,03/19/03,M,66.00,V,M,25.00
      1157531,01/25/04,M,148.00,M,Y,25.00
      1157531,03/01/00,M,290.00,M,M,50.00
      1157531,10/12/03,M,398.00,Y,M,50.00
      1157531,05/25/03,M,404.00,M,M,25.00
      1157531,02/15/01,1,S,M,P,50.00
      1157531,07/02/03,2,M,C,M,25.00
      1158197,Tsevas,Michael ,1115553478,MiTsevas@asknow.tv,151 Washington Rd. ,Lincoln,IN,14089,11/07/04,SG6-8,!Slt?g?x.X?› n?R,M,M,50.00
      1158197,06/09/00,Y,154.00,M,M,100.00
      1158197,09/20/04,Y,361.00,M,M,25.00
      1158197,02/20/01,5,S,M,M,20.00
      1166024,Legge,Ann ,1115553501,AnLegge@asknow.tv,15 Elm Street ,Greenwood,IN,11174,03/07/03,TD12B-4,!t( - %16KX!%? ,M,Y,50.00
      1166024,10/01/02,Y,407.00,M,M,25.00
      1171623,Smith,Ronald ,1115553518,RoSmith@googol.org,2434 Odonym Ave. ,Lincoln,IN,11104,04/07/02,TD12B-17,"ASC%!@FFFR",V,P,100.00
      1171623,09/10/02,Y,61.00,C,M,20.00
      1171623,07/12/02,Y,130.00,M,M,20.00
      1171623,11/01/03,M,132.00,M,M,50.00
      1171623,08/10/01,M,180.00,M,M,50.00
      1171623,01/16/03,Y,265.00,Y,M,25.00
      1171623,07/23/00,Y,393.00,Y,M,50.00
      1171623,02/05/04,1,M,M,P,100.00

  [0 answers]
- Anonymous asked: Textbook: Computer Organization and Design, by Patterson and Hennessy, 4th ed. I am working on the following problem and I am lost on what the question is exactly asking for. Thank you for any assistance. [1 answer]
- Anonymous asked [1 answer]
- Anonymous asked: My operating system is Windows XP Professional. I invoke "clearmake lib" from the command prompt. The WindRiver 5.8.0.0 compiler does run. This compiler tries to open a gcc parser configuration file but cannot find the file, because the path to this file is incorrect, as follows:

      GNU options parser fatal error: config file wasn't found: $(CC_ROOT)\conf\gcc_parser.conf

  In the makefile, CC_ROOT is defined as follows:

      CC_ROOT=$(CC_HOME)$/diab$/5.8.0.0

  CC_HOME is defined as follows:

      CC_HOME = \folder1l\folder2\environment\WindRiver

  The makefile also exports variables needed by the WindRiver compiler as follows:

      export DIABLIB=$(CC_ROOT)
      export WIND_HOME=$(CC_HOME)
      export WIND_PREFERRED_PACKAGES=diab-5.8.0.0
      export WIND_DIAB_PATH=$(CC_HOME)\diab\5.8.0.0

  When I echo the macros DIABLIB and CC_ROOT, I see they are replaced by their values, which is the path. Then why does $(CC_ROOT) appear in the path to the file above? Why doesn't its value appear in the path? If I export DIABLIB without a macro, as follows:

      export DIABLIB=\folder1\folder2\environment\WindRiver\diab\5.8.0.0

  then compilation is successful. [1 answer]
- Anonymous asked [1 answer]
- Anonymous asked [1 answer]
- Anonymous asked: Suppose three active nodes -- A, B and C -- are competing for access to a channel using slotted ALOHA. Assume each node has an infinite number of packets to send. Each node attempts to transmit in each slot with probability p = 0.2. The first slot is numbered slot 1, the second slot is numbered slot 2, and so on. a) What is the probability, p(A), that node A succeeds in a slot? b) What is the probability that node A succeeds for the first time in slot 4? c) What is the probability that some node (either A, B or C) succeeds in slot 2? d) What is the efficiency of this three-node system? [2 answers]
- Anonymous asked [1 answer]
- Anonymous asked: 1. Write a program that reads a 2 dimensional array of R rows and C columns and finds the largest nu... [3 answers]
- Anonymous asked: Assuming that there are 2 relations, course(cid, cname, level) and demand(cid, pre_cid), where cid and pre_cid are foreign keys that relate to the primary key cid in the course relation, how can I write the following queries in Datalog? 1) Get a list of all names of prerequisite courses (not only direct) for the course named 'algorithms'. 2) For each course, print its name and the number of courses which are prerequisites for it, as pairs; if a course has no prerequisites, print the course name and 0. [0 answers]
- GustyBrick1058 asked: Read a disk file named SalesResult.txt that contains a list of sales amounts. See a sample of the file below. Use a one-dimensional array to solve the following problem: A company pays its salespeople on a commission basis. The salespeople receive $200 per week, plus 9% of their gross sales for that week. For example, a salesperson who grosses $5000 in sales in a week receives $200 plus 9% of $5000, or a total of $650. Write a program (using an array of counters) that reads the data file and displays ...

  Sample data file:

      4385 8158 3684 1150 4710 3995 8311 602 7285 3632 3975 3139 1165

  [1 answer]
- Anonymous asked: Suppose that you have a linear array X of 8 integer elements. Write an algorithm to change all the eve... [1 answer]
- SmellyCode asked: Write a PROC that receives in esi the offset of an array and receives in ebx the actual number of elements in the array. (Note: The space allocated for an array is not necessarily the number of elements being utilized in the array.) The PROC should sort the array into ascending order. You may use either the bubble sort or the selection sort for this. The elements should be signed double words. Then write a PROC that receives in esi the offset of an array of double words. The PROC should ask the user for the number of elements to be entered and then populate the array with that number of elements obtained from the user. The PROC should return the number of elements in the eax register. Write another PROC that receives in esi the offset of an array of double words and in ebx the actual number of elements in the array and then displays the array, one element per line. Finally, write a main PROC that reserves space for an array of signed double words that can be as large as 100 numbers. Utilize the PROCs above to obtain the numbers from the user and place them in the array, write out the original array with appropriate labels, sort the array, and then display the sorted array with appropriate labels. Note: The labels will have to be displayed by main, but the elements should be displayed by the appropriate PROC. [0 answers]
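The sorting PROC above may use either bubble sort or selection sort; before committing the loop to assembly, the control flow can be checked in a high-level sketch. This Python version is illustrative only (the exercise itself requires assembly with esi/ebx):

```python
def bubble_sort(values):
    """In-place ascending bubble sort of signed integers --
    the same double loop the assembly PROC would implement."""
    n = len(values)
    for placed in range(n - 1):         # each pass fixes one more element at the end
        swapped = False
        for i in range(n - 1 - placed):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
        if not swapped:                 # already ordered: stop early
            break
    return values

print(bubble_sort([14, -70, 50, 40, -8]))  # [-70, -8, 14, 40, 50]
```

In the assembly version the two loop counters would live in registers, the compare-and-swap becomes a cmp, a conditional jump, and a pair of moves, and the array base stays in esi as the problem specifies.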
• Show less0 answers - SmellyCode askedarray1: 14,70,5... Show moreComparisons Write a program that contains the following array and the following PROCs: array1: 14,70,50,40,80,85,72,88,25,60,74,90,94,79,81,63,97,78,65,82,74,69,70,75,45 MAIN: This should simply set up appropriate registers with the information needed by the other PROCs and then call them as needed to produce the results. After value-returning PROCs are called, MAIN should store the values in appropriately named memory locations. Don't assume that array1 will have a fixed number of elements; that is, don't assume that the array has 25 elements -- your program should work properly if you simply go in and edit array1 to have different elements or a different number of elements. LARGE: This PROC should be passed the location of the array and the number of elements in the array in appropriate registers, and it should return the largest value in the array in the EAX register. SMALL: Same as LARGE, but the value returned should be the smallest value in the array. SUM: Same as LARGE, but the value returned should be the sum of the elements in the array. DISPL: This procedure should be passed, in a register, the location of the first of the three numbers produced by the latter three procedures and it should display all three results, with appropriate labels. Note: This requires the largest, smallest and sum to be stored in contiguous data locations. Make sure the program and all procedures (except MAIN -- it's documentation is the program documentation) have appropriate documentation at the beginning, as well as line by line to explain what is going on. • Show less1 answer - Anonymous askedIn this representat... Show moreb) Consider a stack-based IR with three operations: “add,” “mult,” and “push c.” In this representation, “1*2+3” becomes Operation Stack push 1 1 push 2 1 2 mult 2 push 3 2 3 add 5 i. 
Following the usual rules for precedence and associativity, translate the expression “1+2*3+4” into seven operations in the IR and show the state of the stack after each operation. Operation Stack ii. Rearrange the terms in the expression so that it uses less stack space, translate the expression into the IR (still seven operations), and show the state of the stack after each. Operation Stack Rearranged Expression • Show less0 answers - Anonymous askedI have to find the integral of a function by using the Simpson's Rule. First I gotta use the basic f... Show more I have to find the integral of a function by using the Simpson's Rule. First I gotta use the basic function fx=5.1·sin(2.3 x) at 0.2 and 1.4 for n=10 , 20, 40. After finding the solution for this program, I should find the integral of n must an even number, I did make a do-while loop for that but I am not positive about its correction.• Show less I did work a little bit.But when I compile it, I get lots of error messages.Here what I got so far, // Finding the integrals by using the Simpson's Rule #include <iostream> #include <cmath> #include <iomanip> using namespace std; void integrand (doublex); int main () { int n; // number of integration intervals double fa,fb,h,a,b,sum1,sum2,i,xi,fx; //a=value of x @ lower integr. limit, b=value of x @ upper integr. 
limit, h = (b-a)/n = width of individual interval, xi = a + h*i for i=0 to i=n, f(xi) = value of function at x=xi
cout << "Give the upper limit, b: ";
cin >> b;
cout << "Give the lower limit, a: ";
cin >> a;
cout << "Number of integration intervals (must be even), n: ";
cin >> n;
do {
cout << "n must be even";
cin >> n;
} while (2*(n/2)==n)
fa=5.1*sin(2.3*a);
fb=5.1*sin(2.3*b);
sum1=0;
sum2=0;
h=(b-a)/n;
sum1=0;
for (i=1; i<=n-1; i=i+2)
{
xi=a+(h*i);
sum1=sum1+integrand(xi);
}
for (i=2; i<=n-2; i=i+2)
{
xi=a+(h*i);
sum2=sum2+integrand(xi);
}
}
double integrand doublex ;
{
double fx;
fx=5.1*sin(2.3*x);
return fx;
}

0 answers - Anonymous asked: To Rapunzel: I copied and pasted the incorrect comment into the rating. What I meant to say was: That is one of the best explanations I have gotten from Cramster. I had problems with it initially because I forgot to add #include. Obviously, that was my fault. People like you make Cramster worth the expense. I finally understand this chapter. I wish you the best. The site wouldn't let me update the comment after I posted it. I assure you, no disrespect was meant. I was writing my responses in Word (spell check helps you not look special) and copying them over. I apparently copied the wrong one. Thank you very much for your time and effort. Here is the Programming Challenge: ... should also be displayed. Input Validation: Do not accept negative values for players' numbers or points scored. This is what was given as a solution in Cramster. If you enter a first and last name it causes an infinite loop, it does not display the correct information, and it does not have the input validation. Please rework this so that it includes clear explanations for the steps and values with meaningful names (no letters please). I want to understand this, which is why I'm asking for all of the detail. The following solution was given by Cramster.
Please rework the solution so that the following are included. The problem specifies that input validation is required so that NO negative numbers can be entered. It needs to also show a table that lists each player's number, name, and points scored. If you enter a first and last name it causes an infinite loop. The solution needs to accept and display first and last names. I think this requires cin.getline but I'm not sure how to do that in this program. Please use meaningful variable names and not letters. The letters i and j are used in this and I'm not completely clear how it's being indexed. I am teaching myself, so this is my only instructional resource. Thanks for your time.

#include <iostream>
#include <string>
using namespace std;

// Structure Declaration
struct soccerPlayer
{
string name;
int number;
int points;
};

void main()
{
// Variable Declarations
soccerPlayer players[12];
int totalPoints = 0;

// Request information for each player
cout << "Enter information for each player:" << endl;
for (int i=0; i<12; i++)
{
do
{ // Do while loop for input validation
// While loop repeats until non negative input
cout << "Enter Player Name:";
cin >> players[i].name;
cout << "Enter Player Number:";
cin >> players[i].number;
cout << "Enter Player Points:";
cin >> players[i].points;
} while (players[i].number < 0 || players[i].points < 0);
}

// Calculating total points for team
for (int i=0; i<12; i++)
{
totalPoints = totalPoints=players[i].points;
}
cout << "total points in a Team:" << totalPoints << endl;

int highPoints, j=0;
highPoints = players[0].points;
for (int i=1; i<12; i++)
{
if (players[i].points > highPoints)
{
highPoints = players[i].points;
j=i;
}
cout << "Top points player:" << endl;
cout << "Player:" << players[j].name << endl;
cout << "Player Number:" << players[j].number << endl;
cout << "Player Points:" << highPoints << endl;
system("pause");
}
}

1 answer - AlmostDone23 asked: Develop, with clear explanations and reasoning, a 2-bit subtractor,
namely for the 15th and 16th bit ...

1 answer - Anonymous asked: I have a file named "text.txt". The text that is in the file is stored as one sentence per line. I need to write a program that reads the file's contents and calculates the average number of words per sentence.

1 answer - Anonymous asked: It looks like the answer to question (b) is 22 and for (c) is more than 11. There is no justification ...

0 answers - Anonymous asked: Design a decrease-by-constant-factor algorithm for computing where n > 0, and determine its time efficiency. Explain why this is a decrease-by-constant-factor algorithm, and specify the factor. The response by Ebenezer isn't correct for computing the above problem. Nor is the constant factor supplied or the time efficiency. Thank you for trying, though.

1 answer - Anonymous asked: This is for C programming. Four faculty:

name | current yearly income | raise percentage
     | 58147.75              | 5.7%
     | 48284.36              | 4.9%
     | 54335.66              | 6.6%
     | 79251.77              | 3.8%

I NEED HELP WITH THE FIRST PART. THANK YOU.

1 answer - tejdhar asked: For the R.E. (10+111+110)*: a. Draw a finite automaton (nondeterministic is allowed) to recognize the language of the RE. b. Use the subset method to produce a deterministic automaton.

1 answer - tejdhar asked: Draw finite automata to recognize the following languages: a. The language of strings in {a,b,c}* that don't contain the sub-string bac. b. The language of strings in {a,b} that contain both a sub-string ab and a sub-string ba.
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2010-december-13
CC-MAIN-2014-42
refinedweb
7,280
57.37
Big Data on a Laptop: Tools and Strategies - Part 2

We found an easy way to query CSV data, but performance degraded quickly as data size increased and memory requirements became prohibitive. Today, we'll look at Parquet, a columnar file format that is optimized for space and makes it easier and much faster to extract subsets of the data.

Part 2 is a Bit of a Slog…

Before we start, I want to give notice that this post is going to be a bit dense. Not only is it unusually long, it's also extra heavy on the technical detail. The technologies we're talking about today were originally intended to be used within the Apache Spark cluster computing environment. When used in that manner, a lot of the detail is abstracted away under the hood. Today, we're going to be using some of these components in a standalone laptop environment instead, which requires that we get our hands pretty dirty. We're going to be showing some pretty detailed operations, both in C++ and Python. They're not for everyone, but for those who are proficient with C++ or Python and need to work with big data on small machines, the techniques we'll present today can be transformational for your workflow. For everyone else, stay tuned for Part 3. That's when we'll show how to run Spark on your local machine, which hides a lot of the detail we're covering today.

Save Space and Time with Columnar Data Formats

Recall that we were working with a large commuting patterns dataset with 82 variables in it. If we can avoid loading the entire example dataset into memory, we'd be able to analyze much larger datasets given the same hardware. For instance, in the last query from the earlier post, we only needed to examine four columns out of 82. Columnar file formats address this use case head-on by organizing data into columns rather than rows. Such a structure makes reading in only the required data much faster.
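The core idea, columns stored contiguously with an offset index so that one column can be read without decoding any other, can be sketched in a few lines of Python. This is just a toy illustration, not Parquet's actual on-disk layout (Parquet adds row groups, typed schemas, and per-column compression on top):

```python
import io
import struct

def write_columnar(columns):
    """Pack each named int32 column contiguously and record its (offset, count)."""
    buf = io.BytesIO()
    index = {}
    for name, values in columns.items():
        index[name] = (buf.tell(), len(values))
        buf.write(struct.pack(f"<{len(values)}i", *values))
    return buf.getvalue(), index

def read_column(data, index, name):
    """Seek straight to one column's bytes; other columns are never decoded."""
    offset, count = index[name]
    return list(struct.unpack_from(f"<{count}i", data, offset))

data, index = write_columnar({"YEAR": [1980, 1990, 2000],
                              "PERWT": [100, 120, 90]})
print(read_column(data, index, "PERWT"))  # [100, 120, 90]
```

Reading PERWT here touches only PERWT's bytes; the YEAR column is never unpacked.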
In simple terms at the low level, a reader seeks to the start of a column's worth of data in the data file, then reads until the indicated end location of that column in the file. Columns of data not requested never get read, saving time. To make reads even faster, and to save space on disk, columnar formats typically use data compression on a per-column basis, which often makes the compression algorithms more effective than with row-based data while preserving relatively high-speed decompression.

Parquet

Parquet is a columnar storage format for data used in Apache products like Spark, Drill and many others. It compresses to impressively high ratios while enabling super fast analytic queries. In many cases you win both in huge storage savings and dramatically lower query times. For the following examples, consider Parquet files as extremely high performance, strongly typed versions of CSV files. It's also helpful to be aware of Apache Arrow. Apache developed Parquet to be a columnar storage format. Concurrently, they developed a product called Arrow, which is a cross-language platform for columnar in-memory data. Parquet and Arrow are meant to work together to create a highly performant and efficient data layer. You'll see some references to Arrow in the Python examples later on. (Aside: I've written more extensively about Arrow on my personal blog.) Here I'll demonstrate use of the Parquet data format in a command-line tools setting, using both C++ and Python. While typically coupled with big data tools, you can also use Parquet libraries with your own code to build a command-line data analysis workflow. While we saw in Part 1 that the size of my example dataset is just on the edge of what's possible to fit in a typical laptop's memory, we can still benefit with performance and space savings by converting to Parquet. If the data were ten times larger (easily possible if all available IPUMS U.S.
Census data were used) a columnar, compressed format like Parquet would be absolutely necessary to work with it on a laptop or desktop.

parquet-cpp - A C++ Library for Parquet Data

Thanks to the parquet-cpp project you don't need to set up a big data execution framework like Spark just to use Parquet. The following examples achieve good performance by direct use of the C++ Parquet library. (There are Parquet libraries for other languages too, including Go and Java. There's also Parquet library support for Python, which we will explore later in this article.) Memory use by parquet-cpp will be exactly proportional to the number of columns involved in a query. Some queries can be optimized further by aggregating data as columns are read in (averages, counts) while others require complete column data to compute (e.g. medians). With 17,000,000 rows and four columns our example will require at most 259 megabytes (17000000 rows * 4 columns * 4 bytes per 32-bit value). This means we could work with a Parquet dataset with up to around 525,000,000 rows on an 8GB laptop, larger than the entire population of the U.S., before even attempting to optimize for memory use.

Converting Data to Parquet with C++

The first thing we need is a way to create Parquet data. To create Parquet-formatted data, you can write your own utility using one of these Parquet language libraries. C++ will be most efficient both with memory and CPU usage, but perhaps another language will be more convenient depending on your preferences. For work here at IPUMS I have written a stand-alone tool, make-parquet, in C++ for converting either CSV or fixed-width data (another of the IPUMS data formats) to Parquet format. It's very fast and minimizes memory use; you can easily run it on your laptop. Below is a bit of what the make-parquet program looks like.
When I began writing C++ tools to handle Parquet-formatted data the low-level API was the only interface to the library, so that's what I used to make make-parquet. In essence the parquet-cpp library gives you:

- parquet types to group together into a schema
- parquet::FileReader
- parquet::FileWriter
- parquet::RowGroupReader
- parquet::RowGroupWriter
- parquet::ColumnReader
- parquet::ColumnWriter

You call ColumnReader::ReadBatch() and ColumnWriter::WriteBatch() to actually move data in and out of Parquet files; compression gets handled by the library, as well as buffering. Once you've extracted data from a data source, say a CSV or fixed width text file, the core of the make-parquet program looks like:

// Handles int32 and string types; you could extend to handle floats and larger ints
// by adding and handling additional types of buffers.
//
// The new high level interface supports a variant type that would allow you to pass all data
// buffers as a single argument...
//
// To avoid one argument per data type, you could instead defer conversion from raw string
// data until right before sending to WriteBatch(). However this means that buffering a single
// untyped, optimally-sized row group in RAM requires much more space; perhaps four to five times as much.
// You'd soon run out of memory before running out of CPU cores on most systems...

You can see this is definitely a low-level API. The newer high-level API (not discussed here today; it wasn't available when I was writing make-parquet) would make this process much friendlier. Nevertheless, it may be helpful to see how the data is converted to Parquet. The conversion process takes around five minutes with my make-parquet utility for the example usa_00065.csv dataset.
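The heart of any row-to-columnar converter like make-parquet is a transpose step: rows arrive one at a time, but values are appended to per-column buffers that are later written out (and compressed) column by column. A stdlib-only Python miniature of that step; the function name and the two-column sample data are made up for illustration:

```python
import csv
import io
import struct

def csv_to_column_buffers(csv_text, int_columns):
    """Transpose CSV rows into per-column little-endian int32 buffers.
    Real converters also validate types, compress, and flush row groups."""
    reader = csv.DictReader(io.StringIO(csv_text))
    buffers = {name: bytearray() for name in int_columns}
    for row in reader:
        for name in int_columns:
            buffers[name] += struct.pack("<i", int(row[name]))
    return buffers

cols = csv_to_column_buffers("YEAR,PERWT\n1980,100\n1990,120\n",
                             ["YEAR", "PERWT"])
print(struct.unpack("<2i", bytes(cols["PERWT"])))  # (100, 120)
```

Buffering whole columns like this is why converters typically flush one row group at a time instead of holding the entire file in memory.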
Here's the dataset from the last section before and after converting to Parquet:

$ ls -sh usa_00065.csv
4.2G usa_00065.csv

After converting usa_00065.csv to a parquet version named usa_00065.parquet using default "snappy" compression:

$ ls -sh usa_00065.parquet
688M usa_00065.parquet

See how small it got? That's just one of the benefits. Watch how fast it goes.

Working with Parquet Data in C++

First, going back to our q example that used CSV data:

$ time q -d, -b -O -D"|" -H \
"select YEAR, \
sum(case when TRANWORK>0 then PERWT else 0 end) as trav_total, \
sum(case when TRANWORK=40 or TRANWORK=50 then PERWT else 0 end) as bikers_walkers_total \
from ./usa_00065.csv \
where OCC2010<9900 and YEAR > 1970 \
group by YEAR"

YEAR|trav_total|bikers_walkers_total
1980|94722000 |5385300
1990|114747377|4931475
2000|128268633|4244613
2010|137026072|4515708
2016|150439303|4932296

real 24m38.919s
user 24m18.680s
sys 0m20.048s

We recall this took almost 25 minutes. Now let's see how Parquet can dramatically improve upon this. But before we get there, we have to solve a new data format problem we just created. Recall that the q program needs CSV-formatted data as input. Now that we've converted to parquet, we'll need to do something to get back to CSV first before we can use q. That something in my case is a custom utility I wrote called tabulate_pq.

tabulate_pq - a C++ Parquet tabulator

Note: if C++ isn't your thing, hang tight. I'll eventually explain how to do this in Python, too. tabulate_pq is a C++ program that demonstrates one simple but powerful use of a specialized Parquet reader: weighted cross-tabulations. The program takes a Parquet file, a weight variable (or '1' if no weight variable is present, indicating each record refers to an actual case), and up to three column names to cross-tabulate.
For readers who are not familiar with cross-tabulations, the concept is straightforward: if you are cross-tabulating two variables, say AGE and SEX, you'll end up with a value (aka "cell") for each possible combination that represents the count of records that have that combination: for instance, there will be a value for "1 year old males", another for "1 year old females", a third for "2 year old males", and so on, for every combination of age and sex. tabulate_pq computes these cross-tab cells and then outputs the result as a CSV-formatted table to standard out. Let's see an example.

$ tabulate_pq ./usa_00065.parquet PERWT YEAR OWNERSHP
YEAR,OWNERSHP,total
1960,0,4903864
1960,1,112060391
1960,2,62328797
1970,0,5823600
1970,1,131963200
.....

Here we have a YEAR variable (representing census year) and an OWNERSHP variable which has three possible values for the person's home ownership status: 0 = N/A, 1 = owned, and 2 = rented. In tabulate_pq's CSV output we can see, for example, that in 1960, 112 million people lived in a home owned by themselves or a relative. There's more to tabulate_pq than we show here in this code snippet, but to give you an idea of how tabulate_pq leverages the parquet-cpp library, here's a sample of the core of the program:

// Suppose we can determine the column indices to
// extract by matching column names in the schema
// with their positions....
vector<int> columns_to_tabulate = get_from_schema({"PERWT", "AGE", "MARST"});

// Can extract columns in parallel
reader->set_num_threads(4);

int rg = reader->num_row_groups();

// You can read into a table just a group of rows
// rather than the entire Parquet file...
for (int group_num = 0; group_num < rg; group_num++) {
    std::shared_ptr<arrow::Table> table;
    reader->ReadRowGroup(group_num, columns_to_tabulate, &table);
    auto rows = table->num_rows();
    // ... walk this row group's column chunks and fold the
    // weighted counts into the cross-tab cells ...
}

The concept of row groups is important.
If you're memory constrained you may need to read in one row group worth of a column at a time as we do in the example above (these are known as column chunks). This way you can read in part of a column, deal with the data by performing some reduce operation, and dispose of the memory before moving on to the next row group. tabulate_pq is a useful utility in its own right, but remember that we were doing this so that we could get back to CSV input for our q CSV querying program.

q Revisited

Now that we have a way to get Parquet data back into CSV format, let's see how we can query Parquet data with q. Here's our 25-minute example, this time starting from Parquet data and using our tabulate_pq program as a helper to make the CSV data for q.

$ time ./tabulate_pq ./usa_00065.parquet PERWT YEAR TRANWORK OCC2010 | \
q -d, -b -O -D"|" -H "select YEAR, \
sum(case when TRANWORK>0 then total else 0 end) trav_total, \
sum(case when TRANWORK=40 then total else 0 end) bikers_total \
from - \
where OCC2010<9900 and YEAR > 1970 \
group by YEAR"

** Diagnostic output of tabulate_pq **
num_rows: 17634469
file has 8 row groups.
Data extract took: 1.437 seconds.
Cross tab computation took: 1.027 seconds.

** Output of q **
YEAR|trav_total|bikers_total
1980|94722000 |446400
1990|114747377|461038
2000|128268633|485425
2010|137026072|726197
2016|150439303|861718

real 0m2.212s
user 0m1.859s
sys 0m0.375s

Well, that was super fast!! 25 minutes is now down to 2 seconds. That's a 750x speedup. Not bad! Let's take a closer look at what I did there. The q program is attached to tabulate_pq by a UNIX pipe. So, tabulate_pq processes the Parquet and outputs the CSV-formatted cross-tabulation, which is piped to the next program, q. If we select from - in our from clause, q reads from standard input on the command line. Perfect!
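Returning to the row-group idea: that read-a-chunk, reduce, discard loop is what keeps memory flat no matter how long a column is. In miniature (plain Python standing in for the parquet-cpp row-group loop, with toy data):

```python
def chunked_aggregate(chunks):
    """Fold column chunks into running totals; only one chunk is in memory at a time."""
    total = 0
    count = 0
    for chunk in chunks:  # each chunk plays the role of one row group's column chunk
        total += sum(chunk)
        count += len(chunk)
    return total, count

def chunks_of(values, size):
    """Yield fixed-size slices, standing in for per-row-group reads."""
    for start in range(0, len(values), size):
        yield values[start:start + size]

total, count = chunked_aggregate(chunks_of(list(range(100)), 8))
print(total, count)  # 4950 100
```

Sums, counts, and averages fold cleanly like this; as noted earlier, statistics such as medians need the whole column and can't use the pattern.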
We have off-loaded the expensive I/O to an efficient format and fast program. Since the output of tabulate_pq is tiny compared to the dataset itself, we can efficiently pipe the output right to q for final processing. Any analytical sum or count query involving three or fewer variables can be done in this way using my tabulate_pq helper utility. If I were seriously pursuing this approach, I might extend tabulate_pq to allow many more variables and to handle multi-file Parquet datasets.

Digging Deeper into our Commuting Data

Now that we have a fast way to query the data, it's easier to iterate over our dataset, asking more interesting questions. We can look at tech workers in more detail without having to run the query over lunch. I've added other categories of transportation to the original query to see how tech workers compare to the average worker in every mode of transit:

$ time ./tabulate_pq ./usa_00065.parquet PERWT YEAR TRANWORK OCC2010 | \
q -d, -D"|" -b -O -H \
"select sum(TOTAL), case \
when OCC2010/100 = 10 then 'programmer' else 'other' end as programmer, \
case when TRANWORK=0 then 'N/A' \
when TRANWORK=40 then 'biker' \
when TRANWORK/10=3 then 'public' \
when TRANWORK/10 = 1 then 'car' \
when TRANWORK=50 or TRANWORK=70 then 'walk_or_home' \
else 'other' end as travel \
from - \
where YEAR=2016 and TRANWORK>0 and OCC2010 < 9900 \
group by travel,programmer"

** Diagnostic output of tabulate_pq **
num_rows: 17634469
file has 8 row groups.
Data extract took: 1.238 seconds.
Cross tab computation took: 0.836 seconds.
** Output of q **
sum(TOTAL)|programmer|travel
824999    |other     |biker
36719     |programmer|biker
125432749 |other     |car
3020400   |programmer|car
1569042   |other     |other
40017     |programmer|other
7493340   |other     |public
374745    |programmer|public
11098514  |other     |walk_or_home
548778    |programmer|walk_or_home

real 0m2.189s
user 0m1.984s
sys 0m0.219s

We could look at the relative rates between tech workers and other workers for each mode of transit to see if tech workers seem to choose certain modes of transportation more often than the working population as a whole. But it's starting to get hard to make sense of all of these numbers on the command line, so let's see if we can make this a bit more visual.

Add Visualization to the Pipeline with GnuPlot

I'd like to start exploring commuting patterns by income level, but the results of this next query would be overwhelming to read in a table: I'll need to create a bunch of income brackets, too many rows to scan comfortably. So I'll graph the data with GnuPlot instead. I'm wondering what leads people to choose human-powered (bike or walk) transport to work. Maybe the higher rate of bike commuters among tech workers is a reflection of their relatively high incomes, or maybe they are outliers and rates of non-car commuting are actually higher at lower incomes because cars cost more money. We'll soon find out. I'll ignore work-from-home people (not commuting at all) and public transportation (just to simplify things) for this example. Here I'm gluing a script together with the shell and passing all commands to GnuPlot on the command line. It's a great approach for exploring data, but once you've found the interesting plots you'll likely want to explore gnuplot some more with a dedicated script. You can control the output format, labels, graph styles and really anything about the graph.
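A quick aside on the bracketing arithmetic the income queries lean on: integer-dividing an income by the bin width and multiplying back snaps each income to the bottom of its bracket. Checking that arithmetic in Python:

```python
def bracket(income, width):
    """Snap an income to the bottom of its bracket,
    mirroring the (inctot/width)*width idiom in the SQL."""
    return (income // width) * width

print(bracket(37_500, 25_000))   # 25000  (the $25K bracket)
print(bracket(149_999, 20_000))  # 140000 (the $140K bracket)
print(bracket(24_999, 25_000))   # 0      (the bottom bracket)
```

Because SQL integer division also truncates, grouping by inctot/25000 and displaying (inctot/25000)*25000 label the same bins.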
The '< cat' in the plot command indicates data will come from standard input rather than a file; to plot data from a file instead you'd write something like gnuplot -e "plot 'results.tsv'".

$ ./tabulate_pq usa_00065.parquet PERWT YEAR TRANWORK INCTOT | \
q -d, -H -O "select ((inctot/25000)*25000 / 1000) || 'K' as income, sum(total) as biker_walkers \
from - \
where YEAR=2016 and tranwork in (40,50) and INCTOT>0 and inctot < 990000 \
group by inctot/25000 \
order by inctot/25000 desc" | \
gnuplot -e "set terminal png;set style data lines;\
set datafile separator comma;\
plot '< cat' title columnheader linewidth 5" > count_bikers_walkers_by_income.png

We pipe the table made by q to gnuplot for graphing. Our command makes this graph:

Income in thousands is on the X-axis, numbers of human-powered commuters is on the Y-axis. Pretty much looks like an income distribution; of course the vast majority of anybody doing most things is on the lower income side (by a lot). We're not learning much yet. Instead we should compare percent of bikers and walkers within each income bracket. There may be many fewer people biking who earn $150,000, but what's the ratio of drivers to walkers and bikers at each income bracket? I'm cutting off the income at $400,000; there are too few cases in the sample to get much accuracy at higher income levels.

$ time ./tabulate_pq usa_00065.parquet PERWT YEAR TRANWORK INCTOT | \
q -d, -H -O "select ((inctot/20000) * 20000)/1000 as income_thousands, \
sum(case when tranwork in (40,50) then TOTAL else 0 end) * 100.0 / sum(TOTAL) as bike_or_walk \
from - \
where YEAR=2016 and TRANWORK>0 and INCTOT>0 and INCTOT < 400000 \
group by inctot/20000 \
order by inctot/20000" | \
gnuplot -e "set terminal png;set style data lines;\
set datafile separator comma;\
plot '< cat' title columnheader linewidth 5" > percent_bikers_walkers_by_income.png

real 0m5.051s
user 0m5.031s
sys 0m0.766s

In this plot the X-axis is income in thousands, the Y-axis is percent of total commuters who bike or walk. Returning to our original inquiry about biking to work: are tech workers special, or is there a higher rate of biking to work at moderately high incomes compared to median income and below?
$ ./tabulate_pq usa_00065.parquet PERWT YEAR TRANWORK INCTOT | \
q -d, -H -O "select ((inctot/20000) * 20000)/1000 as income_thousands, \
sum(case when tranwork in (40) then TOTAL else 0 end) * 100.0 / sum(TOTAL) as biking \
from - \
where YEAR=2016 and TRANWORK>0 and INCTOT>0 and INCTOT < 400000 \
group by inctot/20000 \
order by inctot/20000" | \
gnuplot -e "set terminal png;set style data lines;\
set datafile separator comma;\
plot '< cat' title columnheader linewidth 5" > percent_biking_by_income.png

So it appears biking increases a lot as income rises, and it's interesting to note the two spikes that show up on both graphs, the bikers-and-walkers plot and the bikers-only plot. Notably, walking among low income people drops off sharply as income increases, while biking goes in the other direction. Overall, we can see the higher rate of biking among tech workers is in line with everyone else earning similar incomes ($70,000 to $160,000). To be clear, all we've seen so far are some interesting correlations; we could include a number of additional variables to tease out connections between occupation, income and lifestyles. The point here is that this sort of tool chain makes it easy to iterate quickly, testing different hypotheses and rapidly analyzing the data from multiple angles as we refine our inquiry.

Parquet with Python: PyArrow

As promised, it's time to discuss how we can do the same thing in Python as I showed above in C++. The Python binding to Parquet and Arrow is known as PyArrow. With PyArrow, you can write Python code to interact with Parquet-formatted data, and as an added benefit, quickly convert Parquet data to and from Python's Pandas dataframes. Pandas came about as a method to manipulate tabular data in Python. It tries to smooth the data import / export process and provide an API for working with spreadsheet data programmatically in Python. It's essentially an alternate approach to the problem the q utility tries to solve, though Pandas does much more. Chances are if you're working in Python with a lot of data, you already know of Pandas, but if not, it's a great resource! However, large datasets have long been problematic with Pandas due to inefficient memory use, and reading large data could be faster than it is with Pandas.
That's where PyArrow enters the picture. PyArrow is based on the "parquet-cpp" library, and in fact PyArrow is one of the reasons the "parquet-cpp" project was developed in the first place and has reached its current state of maturity. Let's test a similar query to the previous example queries, this time using PyArrow and Pandas. First though, we need to install them. The following code will install into Python 2 by default if that's your system Python, but it will install into Python 3 as well (see the PyArrow documentation, it's excellent):

$ pip install pyarrow
.....

Here's the Python/PyArrow/Pandas way of reading in the same Parquet data, producing a crosstab, and filtering by the appropriate TRANWORK and OCC2010 values:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

table1 = pq.read_table('usa_00065.parquet',
                       columns=['PERWT', 'YEAR', 'OCC2010', 'TRANWORK'],
                       nthreads=4)
df = table1.to_pandas()
filtered = df[(df.YEAR==2016) & (df.TRANWORK>0)]
results = pd.crosstab((filtered.OCC2010>=1000) & (filtered.OCC2010<1100),
                      filtered.TRANWORK==40,
                      filtered.PERWT, aggfunc=sum)
# print out or further process resulting cross-tab table
print results

Here you can see that the table object returned by pyarrow.parquet provides a to_pandas() method, which converts the parquet data to a Pandas dataframe. After that, the script is just pure Pandas, which may be familiar to any data scientist used to working in Python. Timing this, we see it takes just a few seconds longer than the C++ version:

$ time python prog_bikers.py
TRANWORK   False      True
OCC2010
False      145593645  824999
True       3983940    36719

real 0m5.877s
user 0m0.590s
sys 0m1.120s

Not as good as the C++ and q combo, but pretty good. Part of the extra time is due to starting up Python itself. In general, the data reads should be just as fast as with a C++ program. Data read in from Parquet is stored using the Arrow in-memory data storage library mentioned above.
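The pd.crosstab call with aggfunc=sum boils down to a dictionary of weighted running sums, one per cell. A stdlib-only sketch of the same tabulation, using made-up rows rather than the IPUMS extract:

```python
from collections import defaultdict

def weighted_crosstab(rows, row_key, col_key, weight):
    """Sum the weight field for every (row_key, col_key) combination."""
    cells = defaultdict(int)
    for r in rows:
        cells[(row_key(r), col_key(r))] += r[weight]
    return dict(cells)

rows = [
    {"OCC2010": 1010, "TRANWORK": 40, "PERWT": 100},  # tech worker, bikes
    {"OCC2010": 4700, "TRANWORK": 10, "PERWT": 120},  # other, drives
    {"OCC2010": 1020, "TRANWORK": 10, "PERWT": 90},   # tech worker, drives
]
table = weighted_crosstab(
    rows,
    row_key=lambda r: 1000 <= r["OCC2010"] < 1100,  # "is tech worker"
    col_key=lambda r: r["TRANWORK"] == 40,          # "bikes to work"
    weight="PERWT",
)
print(table)
```

Each cell accumulates person weights rather than raw record counts, which is exactly what the PERWT argument to pd.crosstab accomplishes.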
Converting to Parquet with Python

We have shown how we can tabulate Parquet data with Python, but we haven't yet discussed how to convert other formats to Parquet with Python. It's quite easy, though not as efficient as the custom C++ solution make-parquet in terms of memory required. You simply load a CSV into a Pandas data frame, then save the data frame as Parquet.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.read_csv('usa_00065.csv')
print "Loaded csv"
df.to_parquet('usa_00065.from_pyarrow.parquet', flavor='Spark')

We'll call this convert_to_parquet.py. The default behavior of this tiny script will be to load all the CSV data into a data frame and infer the types of each column before saving as Parquet, which is very handy but slightly expensive with respect to time and memory use. Optionally you can read the CSV in by chunks (at the risk of mis-typing some columns with a few edge cases). See the Pandas user guide for more info about auto type inference. Just to give you a notion of how fast Pandas + PyArrow can be:

$ time python convert_to_parquet.py
sys:1: DtypeWarning: Columns (58,76) have mixed types. Specify dtype option on import or set low_memory=False.

real 3m16.278s
user 1m27.734s
sys 1m29.703s

Notice the column type errors: this is a drawback of auto type inference. You can theoretically scan the entire CSV to get the types correct, but my computer ran out of RAM and swapped. I gave up after half an hour. Never fear though: if you're serious about importing a particular schema, you can specify all the column types explicitly when importing into Pandas and avoid this costly step. If you're comfortable with Pandas you could use a Pandas + PyArrow Python script as part of a data analysis pipeline, sending the results to GnuPlot as I showed earlier.
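The DtypeWarning above comes from inferring a column's type chunk by chunk: a column that looks numeric in the early rows may turn out to hold strings later on. A small stdlib illustration (toy data, not the census extract) of why chunked inference and whole-file inference can disagree:

```python
import csv
import io

def infer_type(values):
    """Call a column int only if every value parses as an int."""
    try:
        for v in values:
            int(v)
        return "int"
    except ValueError:
        return "str"

csv_text = "CODE\n1\n2\nN/A\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
first_chunk = [r["CODE"] for r in rows[:2]]   # chunked inference sees only 1, 2
whole_column = [r["CODE"] for r in rows]      # full scan also sees "N/A"

print(infer_type(first_chunk), infer_type(whole_column))  # int str
```

Declaring the types up front, as the text suggests, sidesteps both the full-scan memory cost and the chunked-scan mis-typing risk.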
Finally, when reading Parquet with PyArrow you can use more than one thread to read columns and decompress their data in parallel:

table1 = pq.read_table('usa_00065.parquet', nthreads=4,
                       columns=['PERWT', 'YEAR', 'OCC2010', 'TRANWORK'])

For especially large datasets this may provide some additional speedup.

Conclusion of Part 2

Thanks for sticking with this post to the end! I know that was a lot. Hopefully you found some useful content in there! Although Parquet was originally developed for use with parallel processing frameworks like Hadoop and Spark, today we've seen how this handy columnar storage format can also make single-computer big data computation a lot faster and more scalable with a modest amount of code. The tabulate_pq C++ utility and the Python PyArrow examples above are just proofs of concept. You could make something that supports more variables or add more powerful aggregate functions like medians; tabulate_pq only does sum(). And most importantly, these examples are all still just single-threaded programs, aside from the PyArrow multi-threaded read we just showed. Since all modern computers ship with multiple compute cores, multi-threaded approaches can unlock even more of the potential and performance of your little laptop or desktop to work on big data. We'll explore that concept in Part 3 of this series.

Colin Davis
Code · Data · CSV · Parquet · C++ · Python · Spark · Data Science
https://tech.popdata.org/big-data-on-a-laptop-tools-and-strategies-part-2/
Getting Started with SQL CLR Part 2

As you start playing with SQL CLR, you learn pretty quickly that the built-in namespaces can be a little handicapping. You can get around this by creating an 'unsafe' assembly in your database. In the following demo, we're going to load a third-party .NET library for XMPP messaging into our SQL Server instance, and use a stored procedure to send XMPP messages. The XMPP library I'll be using is agsXMPP. As you'll see in my code below, I tend to just use a common shared library folder so I can house various libraries, and it makes it easier from an organizational standpoint to cram them into the database. I use the following code to 'cram' the library into my instance so I'll be able to use it in a CLR procedure:

CREATE ASSEMBLY [agsXMPP]
FROM 'C:\Subversion\Shared Libs\agsxmpp\agsXMPP.dll'
WITH PERMISSION_SET = UNSAFE
GO

Once that is out of the way, we can roll our CLR proc, deploy, and message to our hearts' content (or until the network drops):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using agsXMPP;
using agsXMPP.protocol.client;

// yes, domains and names have been changed to protect the innocent servers.
public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void clr_SendXMPP()
    {
        XmppClientConnection xmpp = new XmppClientConnection();
        xmpp.AutoPresence = true;
        xmpp.AutoResolveConnectServer = true;
        xmpp.Port = 5222;
        xmpp.UseSSL = false;
        xmpp.Server = "xmpp.datachomp.com";
        xmpp.Username = "SQLAlert";
        xmpp.Password = "SqlClr";
        // register the login handler before opening the connection,
        // so the login event can't fire before the handler is attached
        xmpp.OnLogin += delegate(object o)
        {
            xmpp.Send(new Message(new Jid("Gatir@xmpp.datachomp.com"),
                MessageType.chat, "This really should be a variable yes?"));
        };
        xmpp.Open();
    }
};
https://www.datachomp.com/archives/getting-started-with-sql-clr-part-2/
import "nsISHEntry.idl";

Definition at line 204 of file nsISHEntry.idl.

Clear the child shell list.

Additional ways to create an entry.

Saved position and dimensions of the content viewer; we must adjust the root view's widget accordingly if this has changed when the presentation is restored.

Attribute that indicates if this entry is for a subframe navigation.

Set/Get scrollers' position in anchored pages.

Title for the document.

URI for the document.

Ensure that the cached presentation members are self-consistent. If either |contentViewer| or |windowState| are null, then all of the following members are cleared/reset: contentViewer, sticky, windowState, viewerBounds, childShells, refreshURIList.

Attribute to set and get the cache key for the entry. Definition at line 150 of file nsISHEntry.idl.

Attribute to indicate the content-type of the document that this is a session history entry for. Definition at line 162 of file nsISHEntry.idl.

Content viewer, for fast restoration of presentation. Definition at line 68 of file nsISHEntry.idl.

Attribute to indicate whether the page is already expired in cache. Definition at line 156 of file nsISHEntry.idl.

An ID to help identify this entry from others during subframe navigation. Definition at line 137 of file nsISHEntry.idl.

Definition at line 79 of file nsIHistoryEntry.idl.

LayoutHistoryState for scroll position and form values. Definition at line 122 of file nsISHEntry.idl.

The loadType for this entry. This is typically loadHistory, except when reload is pressed, when it has the appropriate reload flag. Definition at line 131 of file nsISHEntry.idl.

Get the owner, if any, that was associated with the channel that the document that was loaded to create this history entry came from. Definition at line 200 of file nsISHEntry.idl.

A URI representation of the owner, if that's possible. Definition at line 209 of file nsISHEntry.idl.

Definition at line 147 of file nsISHEntry.idl.

Parent of this entry. Definition at line 125 of file nsISHEntry.idl.

Post data for the document. Definition at line 119 of file nsISHEntry.idl.

Referrer URI. Definition at line 65 of file nsISHEntry.idl.

Saved refresh URI list for the content viewer. Definition at line 104 of file nsISHEntry.idl.

Attribute to indicate whether layoutHistoryState should be saved. Definition at line 153 of file nsISHEntry.idl.

Whether the content viewer is marked "sticky". Definition at line 71 of file nsISHEntry.idl.

A readonly property that returns the title of the current entry. The object returned is an encoded string. Definition at line 68 of file nsIHistoryEntry.idl.

A readonly property that returns the URI of the current entry. The object returned is of type nsIURI. Definition at line 61 of file nsIHistoryEntry.idl.

Saved state of the global window object. Definition at line 74 of file nsISHEntry.idl.
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_s_h_entry___m_o_z_i_l_l_a__1__8___b_r_a_n_c_h2.html
Learning Resources for Software Engineering Students

Authors: Samson Tan Min Rong, Phang Chun Rong

Python is a simple yet powerful and versatile language. Conceived in the late 80s, it is now widely used across many fields of computer science and software engineering. While not as speedy as compiled languages like C or Java, Python's emphasis on readability, and the resulting ease of maintenance, often outweighs the advantages conferred by compiled languages. This is especially true in applications where execution speed is non-critical.

If you're a programmer looking to get in on the Python action, check out Google's Python class, which will introduce you to Python's concepts and core data structures like lists, dictionaries, and strings! For absolute beginners, check out this video by Derek Banas where he covers everything from installing Python and the basics to more advanced concepts like Inheritance and Polymorphism in under an hour! If you'd prefer to read, check out Python Guru, which has plenty of code samples to help you along. Both newbies and experienced programmers can also benefit from The Python Tutorial, which aims to introduce readers to Python's unique features and style.

Despite obvious similarities between the two Python versions, Python 3's intentional backward incompatibility makes choosing to learn Python 3 over Python 2 a tough problem for both new and experienced Python programmers. Some concerns include the lack of popular Python 2 packages in Python 3 as well as changes in some Python built-in libraries that might break existing systems. One example is that a simple print 'Hello World' that runs perfectly in Python 2 will cause a syntax error in Python 3. Read more in this Digital Ocean post to understand this conundrum of choosing between Python 2 and Python 3 better. If you choose to learn both Python 2 and Python 3, take a look at some of these important changes to avoid gotchas due to version differences.
Overall, it is recommended to learn Python 3 as a beginner today, as it has been 10 years since Python 3 debuted. This stance is also supported by core members of Python. Moreover, unlike the early days of Python 3's release, many popular packages from Python 2 now support Python 3 as well. Since Python 2 is a legacy language while Python 3 is in active development, it would be better to learn Python 3 today.

When starting new projects or hopping onto existing Python repositories, you are recommended to install dependencies using a virtual environment to avoid dependency conflicts. This is a good practice, especially when managing dependencies from different projects which may rely on different Python versions and packages. The official Python documentation gives instructions on the standard way of creating a virtual environment: defining a directory location and activating it. However, you can consider using other libraries that can make this process smoother. Some of the most popular ones are pyenv, which allows easy management of different Python versions, and pyenv-virtualenv, which allows managing virtualenvs associated with the Python versions managed by pyenv.

Strings, lists, and dictionaries belong to a type of class known as Iterables. An Iterable is defined by Python to be "an object capable of returning its members one at a time". Due to their versatility, you'll often find that strings, lists, and dictionaries are all you'll ever need. However, there may come a time when you'll need to create your own Iterable data structure. If that's the case and you want to delve into the inner workings of Iterable classes, check out Iterables, Iterators and Generators by Ian Ward. He begins by explaining the Iterable class, then goes into the Iterator and Generator classes, both of which are powerful tools in Python. Accessing an element in a list using its index is known as indexing, e.g.
my_list[0] returns the first element of my_list (Iterables are zero-indexed: their first index is 0). Slicing, on the other hand, allows us to access a range of elements in a list. Extended slicing extends this functionality by introducing a step parameter, enabling operations like accessing every other element. Read more about this in a blog post on indexing and slicing in Python! There, I also explain the basics of list comprehension, a type of syntactic sugar for transforming lists. Trey Hunner gives an in-depth explanation here using the for-loops we all know and love.

While Python is really useful for cranking out small scripts, we often want to use it to build fully fledged applications too. When this happens, programming procedurally results in major readability and maintenance issues, especially when your code starts breaking and you're trying to figure out why! This is where Object Oriented Programming comes in, offering a way to think about your program and organize it into readable chunks. Jeff Knupp gives an introduction to Python Classes and OOP here, introducing concepts like static vs instance attributes, inheritance, abstract classes, and the Liskov Substitution Principle.

In addition to scripting and OOP, Python also supports functional programming, a completely different paradigm from OOP and procedural programming. Where procedural programming and OOP make heavy use of state, functional programming eschews it completely. In this paradigm, the result of a function is wholly dependent on its arguments. "Why would I want to relearn how to code?!", you may ask. Big Data's increasing relevance has thrown the spotlight on distributed systems and concurrency techniques. No prizes for guessing which programming paradigm is most amenable to the implementation of these systems. Here is a great introduction to the tools at the heart of functional programming in Python 3: map, filter, reduce, and lambda.
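To make those four tools concrete, here is a small sketch (the data is invented for illustration):

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# map applies a function to every element
squares = list(map(lambda x: x * x, numbers))        # [1, 4, 9, 16, 25]

# filter keeps the elements for which the function returns True
evens = list(filter(lambda x: x % 2 == 0, numbers))  # [2, 4]

# reduce folds the list down to a single value
total = reduce(lambda acc, x: acc + x, numbers, 0)   # 15

print(squares, evens, total)
```

Note that in Python 3, map and filter return lazy iterators, and reduce lives in the functools module rather than being a built-in.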
An alternative way of doing functional programming in Python is to use list comprehensions to replace the map, filter, and reduce functions. Here is a comparison of both methods, complete with examples!

Data Science seems to be all the rage recently, so you'll be glad to know that Python has become a major player in the field of Data Science, rivalled mainly by R. Some commonly used Python libraries in Data Science include pandas for data management and manipulation, numpy and scipy for scientific computing, scikit-learn for general machine learning, and matplotlib for data visualization. To get started, take a look at:

Like most other languages, Python has its own set of common gotchas that can really frustrate newbie Python programmers due to the unintended bugs. Let's consider this common pitfall that many Python programmers encounter.

def append_to(element, to=[]):
    to.append(element)
    return to

my_list = append_to(12)
print(my_list)
# [12]

my_other_list = append_to(42)
print(my_other_list)
# [12, 42]

Looking at the above example, one might think that my_other_list will be [42], but it is actually [12, 42]. The reason is that Python's default arguments, in this case to=[], are evaluated only once, when the function is defined. Learning how to avoid such pitfalls is one huge step towards being a productive Python programmer. Here are some other guides that state some common gotchas and how to avoid them:
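The standard fix for the mutable-default pitfall discussed above is to default to None and create a fresh list inside the function on every call:

```python
def append_to(element, to=None):
    # A new list is created per call instead of sharing one default list
    if to is None:
        to = []
    to.append(element)
    return to

print(append_to(12))  # [12]
print(append_to(42))  # [42], not [12, 42]
```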
https://se-education.org/learningresources/contents/python/introduction-to-python.html
At 09/20/2012 01:16 PM, Minchan Kim Wrote:
> On Thu, Sep 20, 2012 at 11:12:40AM +0800, Wen Congyang wrote:
>> At 09/20/2012 10:30 AM, Minchan Kim Wrote:
>>> On Thu, Sep 20, 2012 at 09:17:43AM +0800, Wen Congyang wrote:
>>>>
>>> I would like to clarify your word.
>>> Create or recreate?
>>> Why I have a question is as I look over the source code, hotadd_new_pgdat
>>> seem to be called if we do hotadd *new* memory. It's not the case for
>>> offline and online again you mentioned. If you're right, I should find
>>> arch_free_nodedata to free pgdat when node is disappear but I can't find it.
>>> Do I miss something?
>>
>> Hmm, when a memory is removed, we don't do cleanup now. We(Fujitsu) posted
>> a patchset to do this:
>>
>> We don't free pgdat in this patchset now. We have two choice:
>> 1. free pgdat
>> 2. don't free it, and reuse it when it is onlined again
>>
>> I'm not sure which choice is better.
>
> I have no idea because I don't know how you guys uses.
> If there is use case that sometime you ues many node burstly but
> ues a few node in most time, 1) would be good POV memory efficiency
> although it makes code rather complicated.
>
> Anyway, it's another story with this patch because it's not merged yet.
>
>>>> function hotadd_new_pgdat(), and we will lost the statistics stored in
>>>> zone->pageset. So we should drain it in offline path.
>>>
>>> Even we drain in offline patch, it still has a problem.
>>>
>>> 1. offline
>>> 2. drain -> OKAY
>>> 3. schedule
>>> 4. Process A increase zone stat
>>> 5. Process B increase zone stat
>>> 6. online
>>> 7. reset it -> we ends up lost zone stat counter which is modified between 2-6
>>
>> I understand why you drain it in online path now. But it still should drain it
>> in offline path because if all pages in this zone are offlined, we will call
>> zone_pcp_reset() to reset zone's pcp. We should also drop it in the function
>> zone_pcp_reset().
>
> Good point.
> How about this?
>
> From e92bf3e96720c89cb18ec32c5db095a27ad4133c Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Thu, 20 Sep 2012 14:11:49 +0900
> Subject: [PATCH v2] memory-hotplug: fix zone stat mismatch
>
> During memory-hotplug, I found NR_ISOLATED_[ANON|FILE]
> are increasing so that kernel are hang out.
>
> The cause is that when we do memory-hotadd after memory-remove,
> __zone_pcp_update clear out zone's ZONE_STAT_ITEMS in setup_pageset
> although vm_stat_diff of all CPU still have value.
>
> In addtion, when we offline all pages of the zone, we reset them
> in zone_pcp_reset without drain so that we lost zone stat item.
>
> This patch fixes it.
>
> * from v1
>   * drain offline patch - KOSAKI, Wen
>
> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
> Cc: Wen Congyang <wency@cn.fujitsu.com>
> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  include/linux/vmstat.h |  4 ++++
>  mm/page_alloc.c        |  7 +++++++
>  mm/vmstat.c            | 12 ++++++++++++
>  3 files changed, 23 insertions(+)
>
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
> index ad2cfd5..5d31876 100644
> --- a/include/linux/vmstat.h
> +++ b/include/linux/vmstat.h
> @@ -198,6 +198,8 @@ extern void __dec_zone_state(struct zone *, enum zone_stat_item);
>  void refresh_cpu_vm_stats(int);
>  void refresh_zone_stat_thresholds(void);
>
> +void drain_zonestat(struct zone *zone, struct per_cpu_pageset *);
> +
>  int calculate_pressure_threshold(struct zone *zone);
>  int calculate_normal_threshold(struct zone *zone);
>  void set_pgdat_percpu_threshold(pg_data_t *pgdat,
> @@ -251,6 +253,8 @@ static inline void __dec_zone_page_state(struct page *page,
>  static inline void refresh_cpu_vm_stats(int cpu) { }
>  static inline void refresh_zone_stat_thresholds(void) { }
>
> +static inline void drain_zonestat(struct zone *zone,
> +			struct per_cpu_pageset *pset) { }
>  #endif /* CONFIG_SMP */
>
>  extern const char * const vmstat_text[];
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ab58346..980f2e7 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5904,6 +5904,7 @@ static int __meminit __zone_pcp_update(void *data)
>  		local_irq_save(flags);
>  		if (pcp->count > 0)
>  			free_pcppages_bulk(zone, pcp->count, pcp);
> +		drain_zonestat(zone, pset);
>  		setup_pageset(pset, batch);
>  		local_irq_restore(flags);
>  	}
> @@ -5920,10 +5921,16 @@ void __meminit zone_pcp_update(struct zone *zone)
>  void zone_pcp_reset(struct zone *zone)
>  {
>  	unsigned long flags;
> +	int cpu;
> +	struct per_cpu_pageset *pset;
>
>  	/* avoid races with drain_pages() */
>  	local_irq_save(flags);
>  	if (zone->pageset != &boot_pageset) {
> +		for_each_online_cpu(cpu) {

A cpu can be offlined before the pages in the zone are offlined. So
I think you should drain it on all possible cpu, not online cpu.

Thanks
Wen Congyang

> +			pset = per_cpu_ptr(zone->pageset, cpu);
> +			drain_zonestat(zone, pset);
> +		}
>  		free_percpu(zone->pageset);
>  		zone->pageset = &boot_pageset;
>  	}
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index b3e3b9d..d4cc1c2 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -495,6 +495,18 @@ void refresh_cpu_vm_stats(int cpu)
>  		atomic_long_add(global_diff[i], &vm_stat[i]);
>  }
>
> +void drain_zonestat(struct zone *zone, struct per_cpu_pageset *pset)
> +{
> +	int i;
> +
> +	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
> +		if (pset->vm_stat_diff[i]) {
> +			int v = pset->vm_stat_diff[i];
> +			pset->vm_stat_diff[i] = 0;
> +			atomic_long_add(v, &zone->vm_stat[i]);
> +			atomic_long_add(v, &vm_stat[i]);
> +		}
> +}
>  #endif
>
>  #ifdef CONFIG_NUMA
http://lkml.org/lkml/2012/9/20/9
Now let's move on to some simple math programs. Let's start by writing a program that converts a person's weight on Earth to his or her weight on the moon. An object that weighs one pound on Earth would weigh 0.1654 pounds on the moon, according to Wikipedia. Therefore, if we know the weight of something on Earth, we can multiply that weight by 0.1654 to get its weight on the moon. In C++, we represent multiplication with the * symbol.

In order to write this program, we will need to ask the user of our program for his or her weight, store that information in the computer, do the conversion, and then display the moon weight back to the user. The easiest way to do this is to save all the numbers in the program as variables. Just like in math class, a variable is a letter or a word that stands for a number or a mathematical formula. For example, we can say

x = 4;
y = 5 * x;

In C++, we are required to define the type of our variables. For example, if we know that a variable will store a decimal point number, we declare it to be a float, which stands for floating (decimal) point number. We always declare our variables inside a function before writing any other code (so for the main function, it would be the first line under int main(){). We do this by saying the variable type and then the names of all the variables that are of that type, followed by a semicolon. For example, we would say float earthWeight for the variable earthWeight described above. For our program, all our variables will be floats, since they are all decimal point numbers. Here's a quick reference of types in C++:

Let's write out an algorithm for this program, so that we make sure the computer has all the information it needs to do the program. We can read a number typed by the user into a variable like this:

cin >> myVariable;

The last thing we have to address is how to print out the person's weight on the moon after we do the conversion.
We can use cout again and simply surround the variable with double left angle brackets, like this:

cout << "You would weigh " << moonWeight << " lbs on the moon.\n";

Now we're ready to write the program. The text preceded by "//" is a comment: just notes to anyone reading the code. Comments are ignored by the computer and are just used by programmers to explain parts of their code. It's good practice to have short comments in your code.

#include <iostream>
using namespace std;

int main(){
    float conversionFactor, earthWeight, moonWeight; //define variables as floats
    cout << "Enter your weight on earth:"; //prompt the user to enter weight
    cin >> earthWeight; //store what the user types as earthWeight
    conversionFactor = 0.1654;
    moonWeight = earthWeight * conversionFactor;
    cout << "You would weigh " << moonWeight << " lbs on the moon.\n"; //print out conversion
}

There are lots of math operators available in C++. The most common are the following:

The % operator gives you the remainder of integer division. So if you want the remainder of 5/2 (which is 1), you would say:

rem = 5 % 2;

Write a program that reverses the program above. This program will take the user's weight on the moon and convert it to Earth weight. Hint: What's the mathematical opposite of multiplication?

Write a program that asks the user for two numbers, and then prints out the sum of the two numbers. Hint: Use multiple cin commands.

Next, you'll learn about functions.
http://www.cs.utexas.edu/~ans/firstbytes/tutorial/math_cpp.html
On Sat, Nov 21, 2009 at 06:20:40PM -0800, zaxis wrote:
> newtype X a = X (ReaderT XConf (StateT XState IO) a)
> #ifndef __HADDOCK__
>     deriving (Functor, Monad, MonadIO, MonadState XState,
>               MonadReader XConf, Typeable)
> #endif
>
> In `X (ReaderT XConf (StateT XState IO) a)`, X is a type constructor, how to
> understand `(ReaderT XConf (StateT XState IO) a)`?

Well, “ReaderT XConf (StateT XState IO) a” is *the* type :). It's a monad
that is a Reader of XConf and has a State of XState. This means you can
use, for example,

  ask :: X XConf

and

  get :: X XState

> And why use `#ifndef __HADDOCK__`?

Because Haddock used to have difficulties in processing some directives,
like that “deriving (..., MonadState XState, ...)” which is part of the
GeneralizedNewtypeDeriving extension.

HTH,

--
Felipe.
http://www.haskell.org/pipermail/haskell-cafe/2009-November/069546.html
#include <SymbolMap.h>

It's very small and fast if most/all of the Symbols are being mapped (but it would be poor for a very sparse mapping). A mapping to Symbol::NULL is the same as no mapping.

I can imagine a cluster of SymbolMaps which would NOT have the "one SymbolTable" restriction, but would have a vector for each SymbolTable used. To avoid hashtable overhead, that cluster would probably use the SymbolTableNumbers, so it would grow with the number of SymbolTables in the process. Alternatively, you could easily support a small number of SymbolTables with linear-time access, which is decent for the normal case of one or two symbol tables. Maybe a SymbolMap2.

Definition at line 28 of file SymbolMap.h.
http://www.w3.org/2001/06/blindfold/api/classSymbolMap.html
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

Data is often stored in CSV files (Comma Separated Values, although the values can be separated by other things than commas). So far, we have loaded csv files with the np.loadtxt command. The loadtxt function has some basic functionality and works just fine, but when we have more elaborate data sets we want more sophisticated functionality. The most powerful and advanced package for data handling and analysis is called pandas, and is commonly imported as pd:

import pandas as pd

We will use only a few functions of the pandas package here. Full information on pandas can be found on the pandas website. Consider the following dataset, which is stored in the file transport.csv. It shows the percentage of transportation kilometers by car, bus or rail for four countries. The dataset has four columns.

country, car, bus, rail
some more explanations, yada yada yada
France, 86.1, 5.3, 8.6
Germany, 85.2, 7.1, 7.7
Netherlands, 86.4, 4.6, 9
United Kingdom, 88.2, 6.5, 5.3

This data file can be loaded with the read_csv function of the pandas package. The read_csv function has many options. We will use three of them here. The rows that need to be skipped are defined with the skiprows keyword (in this case row 1 with the yada yada text). The skipinitialspace keyword is set to True so that the column name ' car' is loaded without the initial space that is in the data file. And the index_col keyword is set to indicate that the names in column 0 can be used as an index to select a row.

tran = pd.read_csv('transport.csv', skiprows=[1], skipinitialspace=True, index_col=0)

pandas loads data into a DataFrame. A DataFrame is like an array, but has many additional features for data analysis.
For starters, once you have loaded the data, you can print it to the screen:

print(tran)

                 car  bus  rail
country
France          86.1  5.3   8.6
Germany         85.2  7.1   7.7
Netherlands     86.4  4.6   9.0
United Kingdom  88.2  6.5   5.3

When the DataFrame is large, you can still print it to the screen (pandas is smart enough not to show the entire DataFrame when it is very large), or you can simply print the first 5 lines of the DataFrame with the .head() function. A better option is the display function to display a nicely formatted DataFrame to the screen.

display(tran)
print('Names of columns:')
print(tran.keys())
for key in tran.keys():
    print(key)

Names of columns:
Index(['car', 'bus', 'rail'], dtype='object')
car
bus
rail

Each DataFrame may be indexed just like an array, by specifying the row and column number using the .iloc syntax (which stands for index location), where column 0 is the column labeled car (the column labeled as country was stored as an index when reading the csv file).

print(tran.iloc[0, 1])  # gives the bus data for France
print(tran.iloc[1, 0])  # gives the car data for Germany
print(tran.iloc[2, 2])  # gives the rail data for Netherlands
print(tran.iloc[3])     # all data for United Kingdom
print(tran.iloc[:, 1])  # all data for bus

5.3
85.2
9.0
car     88.2
bus      6.5
rail     5.3
Name: United Kingdom, dtype: float64
country
France            5.3
Germany           7.1
Netherlands       4.6
United Kingdom    6.5
Name: bus, dtype: float64

Alternatively, and often more explicit, values in a DataFrame may be selected by specifying the indices by name, using the .loc syntax. This is a bit more typing, but it is much clearer what you are doing.
The equivalent of the code cell above, but using indices by name, is

print(tran.loc['France', 'bus'])
print(tran.loc['Germany', 'car'])
print(tran.loc['Netherlands', 'rail'])
print(tran.loc['United Kingdom'])
print(tran.loc[:, 'bus'])

5.3
85.2
9.0
car     88.2
bus      6.5
rail     5.3
Name: United Kingdom, dtype: float64
country
France            5.3
Germany           7.1
Netherlands       4.6
United Kingdom    6.5
Name: bus, dtype: float64

There are two alternative ways to access all the data in a column. First, you can simply specify the column name as an index, without having to use the .loc syntax. Second, the dot syntax may be used by typing .column_name, where column_name is the name of the column. Hence, the following three are equivalent:

print(tran.loc[:, 'car'])  # all rows of 'car' column
print(tran['car'])         # 'car' column
print(tran.car)

If you want to access the data in a row, only the .loc notation works:

tran.loc['France']

car     86.1
bus      5.3
rail     8.6
Name: France, dtype: float64

numpy functions for DataFrames

DataFrame objects can often be treated as arrays, especially when they contain data. Most numpy functions work on DataFrame objects, but they can also be accessed with the dot syntax, like dataframe_name.function(). Simply type tran. in a code cell and then hit the [tab] key to see all the functions that are available (there are many). In the code cell below, we compute the maximum value of transportation by car, the country corresponding to the maximum value of transportation by car (in pandas this is idxmax rather than the argmax used in numpy), and the mean value of all transportation by car.

print('maximum car travel percentage:', tran.car.max())
print('country with maximum car travel percentage:', tran.car.idxmax())
print('mean car travel percentage:', tran.car.mean())

maximum car travel percentage: 88.2
country with maximum car travel percentage: United Kingdom
mean car travel percentage: 86.47500000000001

You can also find all values larger than a specified value, just like for arrays.
print('all rail travel above 8 percent:')
print(tran.rail[tran.rail > 8])

all rail travel above 8 percent:
country
France         8.6
Netherlands    9.0
Name: rail, dtype: float64

The code above identified France and the Netherlands as the countries with more than 8% transport by rail, but the code returned a series with the country names and the value in the rail column. If you only want the names of the countries, you need to ask for the values of the index column:

print(tran.index[tran.rail > 8].values)

['France' 'Netherlands']

The file annual_precip.csv contains the average yearly rainfall and total land area for all the countries in the world (well, there are some missing values); the data is available on the website of the world bank. Open the data file to see what it looks like (just click on it in the Files tab on the Jupyter Dashboard). Load the data with the read_csv function of pandas, making sure that the names of the countries can be used to select a row, and perform the following tasks:

Print the first 5 lines of the DataFrame to the screen with the .head() function.

Adding a column to a DataFrame

A column may be added to a DataFrame by simply specifying the name and values of the new column using the syntax DataFrame['newcolumn'] = something. For example, let's add a column named public_transport, which is the sum of the bus and rail columns, and then find the country with the largest percentage of public transport:

tran['public_transport'] = tran.bus + tran.rail
print('Country with largest percentage public transport:', tran.public_transport.idxmax())

Country with largest percentage public transport: Germany

You can plot the column or row of a DataFrame with matplotlib functions, as we have done in previous Notebooks, but pandas has also implemented its own, much more convenient, plotting functions (still based on matplotlib in the background, of course). The plotting capabilities of pandas use the dot syntax, like dataframe.plot().
All columns can be plotted simultaneously (note that the names appear on the axes and the legend is added automatically!).

tran.plot();  # plot all columns

You can also plot one column at a time. The style of the plot may be specified with the kind keyword (the default is 'line'). Check out tran.plot? for more options.

tran['bus'].plot(kind='bar');

DataFrames may be sorted with the .sort_values function. The keyword inplace=True replaces the values in the DataFrame with the new sorted values (when inplace=False a new DataFrame is returned, which you can store in a separate variable so that you have two datasets, one sorted and one unsorted). The sort_values function has several keyword arguments, including by, which is either the name of one column to sort by or a list of columns, so that data is sorted by the first column in the list and, when values are equal, they are sorted by the next column in the list. Another keyword is ascending, which you can use to specify whether to sort in ascending order (ascending=True, which is the default) or descending order (ascending=False):

print('Data sorted by car use:')
display(tran.sort_values(by='car'))
print('Data sorted by bus use:')
display(tran.sort_values(by='bus'))

Data sorted by car use:
Data sorted by bus use:

Sometimes (quite often, really), the names of columns in a dataset are not very convenient (long, including spaces, etc.). For the example of the transportation data, the columns have convenient names, but let's change them for demonstration purposes. You can rename columns inplace, and you can change as many columns as you want. The old and new names are specified with a Python dictionary. A dictionary is a very useful data type. It is specified between braces {}, and links a word in the dictionary to a value. The value can be anything. You can then use the word in the dictionary as the index, just like you would look up a word in a paper dictionary.
firstdictionary = {'goals': 20, 'city': 'Delft'}
print(firstdictionary['goals'])
print(firstdictionary['city'])

20
Delft

tran.rename(columns={'bus': 'BUS', 'rail': 'train'}, inplace=True)
display(tran)

The index column, with the countries, is now called 'country', but we can rename that too, for example to 'somewhere in Europe', with the following syntax:

tran.index.names = ['somewhere in Europe']
display(tran)

Continue with the average yearly rainfall and total land area for all the countries in the world and perform the following tasks:

iloc syntax.

In time series data, one of the columns represents dates, sometimes including times, together referred to as datetimes. pandas can be used to read csv files where one of the columns includes datetime data. You need to tell pandas which column contains datetime values and pandas will try to convert that column to datetime objects. Datetime objects are very convenient, as specifics of the datetime object may be accessed with the dot syntax: .year returns the year, .month returns the month, etc. For example, consider the following data stored in the file timeseries1.dat:

date, conc
2014-04-01, 0.19
2014-04-02, 0.23
2014-04-03, 0.32
2014-04-04, 0.29

The file may be read with read_csv using the keyword parse_dates=[0] so that column number 0 is converted to datetimes:

data = pd.read_csv('timeseries1.dat', parse_dates=[0], skipinitialspace=True)
display(data)

The rows of the DataFrame data are numbered, as we have not told pandas what column to use as the index of the rows (we will do that later). The first column of the DataFrame data has datetime values.
We can access, for example, the year, month, and day with the dot syntax print('datetime of row 0:', data.iloc[0, 0]) print('year of row 0:', data.iloc[0, 0].year) print('month of row 0:', data.iloc[0, 0].month) print('day of row 0:', data.iloc[0, 0].day) datetime of row 0: 2014-04-01 00:00:00 year of row 0: 2014 month of row 0: 4 day of row 0: 1 You can get part of the date from an entire column (so for all rows) using the .dt syntax data.date.dt.day # day for entire date column 0 1 1 2 2 3 3 4 4 5 Name: date, dtype: int64 Time series data may also contain the time in addition to the date. For example, the data of the file timeseries2.dat, shown below, contains the day and time. You can access the hour or minutes, but also the time of a row of the DataFrame with the .time() function. date, conc 2014-04-01 12:00:00, 0.19 2014-04-01 13:00:00, 0.20 2014-04-01 14:00:00, 0.23 2014-04-01 15:00:00, 0.21 data2 = pd.read_csv('timeseries2.dat', parse_dates=[0], skipinitialspace=True) display(data2) print('hour of row 0:', data2.iloc[0, 0].hour) print('minute of row 0:', data2.iloc[0, 0].minute) print('time of row 0:', data2.iloc[0, 0].time()) hour of row 0: 12 minute of row 0: 0 time of row 0: 12:00:00 data2.loc[data2.conc>0.2, 'conc'] = 0.2 display(data2) Rainfall data for the Netherlands may be obtained from the website of the Royal Dutch Meteorological Society KNMI . Daily rainfall for the weather station Rotterdam in 2012 is stored in the file rotterdam_rainfall_2012.txt. First open the file in a text editor to see what the file looks like. At the top of the file, an explanation is given of the data in the file. Read this. Load the data file with the read_csv function of pandas. Use the keyword skiprows to skip all rows except for the row with the names of the columns. Use the keyword parse_dates to give either the name or number of the column that needs to be converted to a datetime. 
Don't forget the skipinitialspace keyword, else the names of the columns may start with a bunch of spaces. Perform the following tasks: use the plot function of pandas to create a line plot of the daily rainfall with the number of the day (so not the date) along the horizontal axis, and use matplotlib functions to add labels to the axes and set the limits along the horizontal axis from 0 to 365. In this exercise we are going to compute the total monthly rainfall for 2012 in the City of Rotterdam using the daily rainfall measurements we loaded in the previous Exercise. Later on in this Notebook we learn convenient functions from pandas to do this, but here we are going to do this with a loop. Create an array of 12 zeros to store the monthly totals and loop through all the days in 2012 to compute the total rainfall for each month. The month associated with each row of the DataFrame may be obtained with the .month syntax, as shown above. Print the monthly totals (in mm/month) to the screen and create a bar graph of the total monthly rainfall (in mm/month) vs. the month using the plt.bar function of matplotlib. data = pd.read_csv('timeseries1.dat', parse_dates=[0], index_col=0) display(data) print('data on April 1:', data.loc['2014-04-01']) print('data on April 2:', data.loc['2014-04-02']) data on April 1: conc 0.19 Name: 2014-04-01 00:00:00, dtype: float64 data on April 2: conc 0.23 Name: 2014-04-02 00:00:00, dtype: float64 DataFrames have a very powerful feature called resampling. Downsampling refers to going from high frequency to low frequency. For example, going from daily data to monthly data. Upsampling refers to going from low frequency to high frequency. For example going from monthly data to daily data. For both upsampling and downsampling, you need to tell pandas how to perform the resampling. Here we discuss downsampling, where we compute monthly totals from daily values.
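A concrete version of the loop described above, with invented daily rainfall values spanning two months (the real exercise uses the Rotterdam file):

```python
import numpy as np
import pandas as pd

# Hypothetical daily rainfall in mm/day (values invented)
dates = pd.to_datetime(['2012-01-30', '2012-01-31', '2012-02-01', '2012-02-02'])
rain = pd.DataFrame({'RH': [1.0, 2.0, 3.0, 4.0]}, index=dates)

# Accumulate each day's rainfall into the bin for its month
monthlyrain = np.zeros(12)
for i in range(len(rain)):
    month = rain.index[i].month        # month number, 1 through 12
    monthlyrain[month - 1] += rain.RH.iloc[i]

print(monthlyrain[:2])  # January total 3.0, February total 7.0
```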
First we load the daily rainfall in Rotterdam in 2012 from the file rotterdam_rainfall_2012.txt and specify the dates as the index (this is the column labeled as YYYYMMDD). We resample the rain to monthly totals using the resample function. You have to tell the resample function to what frequency it needs to resample. Common ones are 'A' for yearly, 'M' for monthly, 'W' for weekly, 'D' for daily, and 'H' for hourly, but there are many other ones. The keyword argument kind is used to tell pandas where to assign the computed values to. You can assign the computed value to the last day of the period, or the first day, or to the entire period (in this case the entire month). The latter is done by specifying kind='period', which is what we will do here. Finally, you need to specify how to resample. This is done by adding a numpy function at the end of the resample statement, like dataframe.resample(...).npfunc() where npfunc can be any numpy function like mean for the mean (that is the default), sum for the total, min, max, etc. Calculating the monthly totals and making a bar graph can now be done with pandas as follows. rain = pd.read_csv('rotterdam_rainfall_2012.txt', skiprows=9, parse_dates=['YYYYMMDD'], index_col='YYYYMMDD', skipinitialspace=True) rain.RH[rain.RH<0] = 0 # remove negative values rain.RH = rain.RH * 0.1 # convert to mm/day monthlyrain = rain.RH.resample('M', kind='period').sum() display(monthlyrain) monthlyrain.plot(kind='bar') plt.ylabel('mm/month') plt.xlabel('month'); YYYYMMDD 2012-01 83.0 2012-02 24.3 2012-03 21.9 2012-04 57.6 2012-05 76.5 2012-06 119.0 2012-07 121.6 2012-08 93.4 2012-09 52.0 2012-10 132.6 2012-11 63.3 2012-12 149.5 Freq: M, Name: RH, dtype: float64 The file rotterdam_weather_2000_2010.txt contains daily weather data at the weather station Rotterdam for the period 2000-2010 (again from the KNMI). Open the data file in an editor to see what is in it. Perform the following tasks using the plot function of pandas.
Plot the mean temperature on the secondary $y$-axis (use the help function to find out how). The resample method resamples to, for example, weeks, one week at a time. The rolling method performs a similar computation for a moving window, where the first argument is the length of the moving window. For example, a 30 day rolling total rainfall first computes the total rainfall in the first 30 days, from day 0 till day 30, then from day 1 till day 31, from day 2 till 32, etc. The value can be assigned to the end of the rolling period, or to the center of the rolling period (by setting center=True). The monthly total rainfall and 30-day rolling total are compared in the figure below. plt.figure(figsize=(12, 4)) plt.subplot(121) monthlyrain.plot(kind='bar') plt.xlabel('2012') plt.ylabel('total monthly rainfall (mm)') plt.subplot(122) rain.RH.rolling(30, center=True).sum().plot() plt.xlabel('2012') plt.ylabel('total 30-day rainfall (mm)'); rain = pd.read_csv('annual_precip.csv', skiprows=2, index_col=0) # print('First five lines of rain dataset:') display(rain.head()) # print() print('Average annual rainfall in Panama is',rain.loc['Panama','precip'],'mm/year') # print() print('Land area of the Netherlands is', rain.loc['Netherlands','area'], 'thousand km^2') # print() print('Countries where average rainfall is below 200 mm/year') display(rain[ rain.precip < 200 ]) # print() print('Countries where average rainfall is above 2500 mm/year') display(rain[ rain.precip > 2500 ]) # print() print('Countries with almost the same rainfall as Netherlands') display(rain[abs(rain.loc['Netherlands','precip'] - rain.precip) < 50]) First five lines of rain dataset: Average annual rainfall in Panama is 2692.0 mm/year Land area of the Netherlands is 33.7 thousand km^2 Countries where average rainfall is below 200 mm/year Countries where average rainfall is above 2500 mm/year Countries with almost the same rainfall as Netherlands rain['totalq'] = rain.precip * rain.area * 1e-3 # print('Five 
countries with largest annual influx:') rain.sort_values(by='totalq', ascending=False, inplace=True) display(rain[:5]) # rain.totalq[:10].plot(kind='bar'); Five countries with largest annual influx: rain = pd.read_csv('rotterdam_rainfall_2012.txt', skiprows=9, parse_dates=['YYYYMMDD'], skipinitialspace=True) # convert to mm/d rain.iloc[:,2] = rain.iloc[:,2] * 0.1 # set negative values to zero rain.loc[rain.RH < 0, 'RH'] = 0 rain.RH.plot() plt.xlabel('day') plt.ylabel('daily rainfall (mm/day)') plt.xlim(0, 365) print('Maximum daily rainfall', rain.RH.max()) print('Date of maximum daily rainfall', rain.YYYYMMDD[rain.RH.idxmax()]) Maximum daily rainfall 22.400000000000002 Date of maximum daily rainfall 2012-12-22 00:00:00 monthlyrain = np.zeros(12) for i in range(len(rain)): month = rain.iloc[i,1].month monthlyrain[month - 1] += rain.iloc[i, 2] print(monthlyrain) # plt.bar(np.arange(12), monthlyrain, width=0.8) plt.xlabel('month') plt.ylabel('monthly rainfall (mm/month)') plt.xticks(np.arange(12), ['J', 'F', 'M', 'A', 'M', 'J', 'J', 'A', 'S', 'O', 'N', 'D']); [ 83. 24.3 21.9 57.6 76.5 119. 121.6 93.4 52. 132.6 63.3 149.5] weather = pd.read_csv('rotterdam_weather_2000_2010.txt', skiprows=11, parse_dates=['YYYYMMDD'], index_col='YYYYMMDD', skipinitialspace=True) weather.TG = 0.1 * weather.TG weather.RH = 0.1 * weather.RH weather.EV24 = 0.1 * weather.EV24 weather.loc[weather.RH < 0, 'RH'] = 0 yearly_rain = weather.RH.resample('A', kind='period').sum() yearly_evap = weather.EV24.resample('A', kind='period').sum() yearly_temp = weather.TG.resample('A', kind='period').mean() ax1 = yearly_rain.plot() ax1 = yearly_evap.plot() plt.ylabel('Rain/evap (mm/year)') ax2 = yearly_temp.plot(secondary_y=True) plt.xlabel('Year') plt.ylabel('Mean yearly temperature (deg C)') plt.legend(ax1.get_lines() + ax2.get_lines(), ['rain', 'evap', 'temp'], loc='best');
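The resample and rolling calls used in these solutions can be checked on a tiny invented series; the kind='period' keyword is omitted here since it only affects how the result is labeled:

```python
import math
import pandas as pd

# Hypothetical daily series spanning the end of January and start of February
idx = pd.date_range('2012-01-30', periods=4, freq='D')
daily = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)

# Downsampling: monthly totals (January: 1+2, February: 3+4)
monthly = daily.resample('M').sum()
print(monthly.tolist())  # [3.0, 7.0]

# Rolling window: centered 3-day moving total; incomplete windows give NaN
roll = daily.rolling(3, center=True).sum()
print(roll.tolist())  # [nan, 6.0, 9.0, nan]
```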
https://nbviewer.jupyter.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook8_pandas/py_exploratory_comp_8_sol.ipynb
sopen() Open a file for shared access Synopsis: #include <unistd.h> #include <fcntl.h> #include <sys/stat.h> #include <sys/types.h> #include <share.h> int sopen( const char* filename, int oflag, int share, ... ); Arguments: - filename - The path name of the file that you want to open. - oflag - Flags that specify the status and access modes of the file. This argument is a combination of the following bits (defined in <fcntl.h>): - O_CREAT — create the file if it doesn't exist. This bit has no effect if the file already exists. - O_TRUNC — truncate the file to contain no data if the file exists; this bit has no effect if the file doesn't exist. - O_EXCL — open the file for exclusive access. If the file exists and you also specify O_CREAT, the open fails (that is, use O_EXCL to ensure that the file doesn't already exist). - share - The shared access for the file. This is a combination of the following bits (defined in <share.h>): - SH_COMPAT — set compatibility mode. - SH_DENYRW — prevent read or write access to the file. - SH_DENYWR — prevent write access to the file. - SH_DENYRD — prevent read access to the file. - SH_DENYNO — permit both read and write access to the file. If you set O_CREAT in oflag, you must also specify the following argument: - mode - An object of type mode_t that specifies the access mode that you want to use for a newly created file. For more information, see Access permissions in the documentation for stat(). Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The sopen() function opens the file specified by filename, applying the access modes given by oflag and the sharing mode given by share. Returns: A descriptor for the file, or -1 if an error occurs while opening the file. Errors: - EBUSY - Sharing mode (share) was denied due to a conflicting open. - EEXIST - The O_CREAT and O_EXCL flags are set, and the named file exists. - EISDIR - The named file is a directory, and the oflag argument specifies write-only or read/write access. - ELOOP - Too many levels of symbolic links or prefixes.
- EMFILE - No more descriptors available (too many open files). - ENOENT - Path or file not found. - ENOSYS - The sopen() function isn't implemented for the filesystem specified in filename.
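The O_CREAT/O_EXCL interaction described above is shared by POSIX-style open calls in general, so it can be demonstrated from Python's os module; this is an analogy using os.open, not the QNX sopen() API itself:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.txt')

# O_CREAT | O_EXCL: succeeds only if the file does not exist yet
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
os.close(fd)

# A second exclusive create of the same path must fail (EEXIST)
try:
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    second_create_failed = False
except FileExistsError:
    second_create_failed = True

print(second_create_failed)  # True: O_EXCL prevented clobbering the file
```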
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/sopen.html
What are the differences between these two code fragments? Which way is considered to be more pythonic? Using type() import types if type(a) is types.DictType: do_something() if type(b) in types.StringTypes: do_something_else() Using isinstance() if isinstance(a, dict): do_something() if isinstance(b, str) or isinstance(b, unicode): do_something_else() isinstance() isn't all that good, mind you—it's just less bad than checking equality of types. The normal, Pythonic, preferred solution is almost invariably "duck typing": try using the argument as if it was of a certain desired type, do it in a try/except statement catching all exceptions that could arise if the argument was not in fact of that type (or any other type nicely duck-mimicking it;-), and in the except clause, try something else (using the argument "as if" it was of some other type). One case where a type check is reasonable is distinguishing strings from other iterables, for example: if isinstance(x, basestring): return treatasscalar(x)
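The practical difference shows up with subclasses, and the duck-typing alternative is a try/except; both can be shown in a few lines (MyDict and the total() helper are invented for illustration):

```python
class MyDict(dict):
    """A trivial dict subclass."""

d = MyDict()
print(isinstance(d, dict))  # True: isinstance() accepts subclasses
print(type(d) is dict)      # False: exact type comparison rejects them

# Duck typing (EAFP): try the operation, recover in the except clause
def total(x):
    try:
        return sum(x)   # works for any iterable of numbers
    except TypeError:
        return x        # fall back to treating x as a scalar

print(total([1, 2, 3]))  # 6
print(total(10))         # 10
```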
https://codedump.io/share/lkMUsgwqKuQc/1/differences-between-isinstance-and-type-in-python
In this article, we’ll take a look at using the fseek() function in C/C++. fseek() is a very useful function to traverse through a file. We can ‘seek’ to different locations by moving the file pointer. This enables us to control where we read from and write to in a file. Let’s take a look at using this function with some illustrative examples! Basic Syntax of fseek() in C/C++ The fseek() function moves the file pointer within a file, based on the options that we give it. This function is present in the <stdio.h> header file. The prototype of the function is as follows: #include <stdio.h> int fseek(FILE* fp, long offset, int position); Usually, if we are moving the pointer, we need to specify the starting position (offset) from which it will move. There are three options for choosing position, from where you can use offset to shift the pointer. Here, position can take the following macro values: - SEEK_SET -> We place the initial position at the start of the file, and shift from there. - SEEK_CUR -> The initial position is taken at the current position of the existing file pointer. - SEEK_END -> We place the initial position at the end of the file. If you shift the pointer from this position, you will reach EOF. If the function executes successfully, it will return 0. Otherwise, it will return a non-zero value. NOTE: In the case of SEEK_END, use a negative offset to move backwards from the end of the file. If you try to seek to a position before the start of the file, the call will fail! Now that we’ve covered the basic syntax, let’s look at some examples using fseek(). For the entire demonstration, we’ll work with the file sample.txt with the following content: Hello from JournalDev This is a sample file This is the last line. Using fseek() in C / C++ – Some Examples In our first example, we’ll use fseek(), along with fread(), to read from an existing file.
We’ll move the pointer to the start of the file, and place the offset at a distance of 5 positions. offset = 5 #include <stdio.h> int main() { // Open the file FILE* fp = fopen("sample.txt", "r"); // Move the pointer to the start of the file // And set offset as 5 fseek(fp, 5, SEEK_SET); char buffer[512]; // Read from the file using fread() and null-terminate the buffer size_t n = fread(buffer, sizeof(char), sizeof(buffer) - 1, fp); buffer[n] = '\0'; printf("File contains: %s\n", buffer); // Close the file fclose(fp); return 0; } Output File contains: from JournalDev This is a sample file This is the last line. As you can see, it only starts reading from position 5, after the first 5 characters. So we do not see Hello. Now, we’ll move the pointer to the end, using SEEK_END. We’ll append to the same file, by using fwrite() at the end! #include <stdio.h> int main() { // Open the file for writing FILE* fp = fopen("sample.txt", "a"); // Move the pointer to the end of the file fseek(fp, 0, SEEK_END); char text[] = "This is some appended text"; // Write to the file using fwrite(), excluding the terminating null byte fwrite(text, sizeof(char), sizeof(text) - 1, fp); printf("Appended:%s to the file!\n", text); // Close the file fclose(fp); return 0; } Output Hello from JournalDev This is a sample file This is the last line. This is some appended text Indeed, we were able to append the text successfully to the file! Conclusion We learned about using the fseek() function in C / C++, which is quite useful if you want to shift the file pointer. References - Linux manual page on the fseek() function in C
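The three position constants are standardized (SEEK_SET=0, SEEK_CUR=1, SEEK_END=2), so the same seeking behavior can be checked quickly from Python, whose file objects wrap the C stdio semantics (the file content here is invented):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'sample.txt')
with open(path, 'w') as f:
    f.write('Hello World')

with open(path, 'r') as f:
    f.seek(6, os.SEEK_SET)  # offset 6 from the start of the file
    tail = f.read()
print(tail)                 # World

with open(path, 'rb') as f:
    f.seek(-5, os.SEEK_END)  # negative offset: move back from the end
    tail_bytes = f.read()
print(tail_bytes)            # b'World'
```

Note that Python restricts non-zero offsets with SEEK_END to files opened in binary mode, which is why the second block uses 'rb'.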
https://www.journaldev.com/40749/fseek-function-c-plus-plus
****************************************************************************** RELEASE NOTES FOR HEASOFT 6.0.2 August 9, 2005 ****************************************************************************** The HEASOFT 6.0.2 release is primarily driven by the release of the Swift software version 2.1, but includes some minor updates to other parts of HEASOFT, notably the 'xselect' and 'extractor' utilities. The updates for Swift 2.1 will require users to completely replace any previous HEAsoft distribution before installing the 6.0.2 software. For reference, the original Release Notes for HEASOFT 6.0 are included below, after this introductory section describing the major updates included in HEASOFT 6.0.2: Swift software v 2.1: Attitude -------- aspect allow CALDB for alignment file parameter coordinator allow CALDB for telescope definition file parameter write TNULLn (for SKY) and TCUNIn keywords prefilter note: there was a change to the atFunctions library that results in a ~0.5 degree shift in Greenwich sidereal time (and parameters derived from it, e.g., longitude) Swift/bat --------- batbinevt * enhanced and corrected time filtering for survey (DPH) data * user can specify ebins=INFILE or ebins=FILEBINS for consistency * output GTI now has proper exposure keywords * enhanced error messages when CALDB files are missing * when transferring input file keywords to output, delete some confusing ones like LIVETIME, ONTIME, etc * bug fixes batcelldetect * significant performance enhancements when doing PSF fitting * more image projection types are now allowed (change from CFITSIO WCS routines to WCSLIB) * NEW PARAMETER keepbits allows output images to be gzipped significantly * output images now have useful extension names * multiple variance maps are written if there are multiple input images * when PSF fitting, the width of the PSF is held fixed unless explicitly set in the NEW PARAMETERS psffwhm and psftopwidth.
* improved handling of non-tangent plane projection images and images in galactic coordinates * other enhancements and bug fixes batclean * balancing is now performed when outversion='fit' * vector SNR columns are now handled * range images are now handled (correct statistics) batdrmgen * added a better gain correction method, to better match calibration peaks baterebin * BUG FIX in cases where the output EBOUNDS extension has the wrong number of rows * all extensions of the input are copied to the output now * warnings and error checks were revised batfftimage * NEW PARAMETER keepbits allows output images to be gzipped significantly (and is set by default) * enhanced error messages when CALDB files are missing * for pixels below the partial coding threshold, the significance is now always set to NULL * NEW PARAMETER 'time' allows user override of the image time bathotpix * NEW PARAMETER 'row' to allow selective image processing for multi-image files batmasktaglc * BUG FIX for handling of pipeline detector enable/disable maps batmaskwtevt * additional error checking * enhanced error messages when CALDB files are missing batmaskwtimg * enhanced error messages when CALDB files are missing * task now uses the image MIDPOINT TIME instead of start time in order to be more consistent with batfftimage * additional error checking and documentation updates battblocks * NEW PARAMETER coalescefrac, to improve the robustness of the first and last time bins
Swift/uvot ---------- ALL default value of calibration file parameters set to CALDB parameter values of CALDB result in the derived path being written to the parameter history keywords uvotdetect if the fraction of zeros in the background exceeds 10%, the background and variance are calculated externally and passed to SExtractor uvotevgrism updated grism calibration file format uvotimgrism updated grism calibration file format write position of zeroth order to output files uvotimsum allow a numbered range of HDUs to be excluded from the sum uvotmag write the coincidence loss corrected rate and error to RATE_COR and RATE_COR_ERR columns do not include zero point error in magnitude/flux errors uvotpict use PostScript level 1 modified label spacing uvotsequence *NEW TOOL* List and visualize UVOT observing sequences: determine the times of each Swift snapshot and map each image to to a particular snapshot. Results are provided as standard output. Optionally, the results may be plotted. uvottfc added option to smooth sparse image Swift/xrt --------- NEW TASK: * xrtwtcorr - Correct on-ground the bias for data taken in Windowed Timing mode using the last twenty pixels of each frame telemetred with the new version of the flight software (v.8.9) Major changes on tasks: * xrtflagpix - Update to flag Photon Counting events on pixels located on the burned spot regions when the frame temperature is higher than the input reference temperature. New input parameter 'maxtemp'. - Do not set to 0 all bits of the STATUS column but only the ones used by this task when 'overstatus=yes'. - Add the possibility to do not flag calibration sources when 'srcfile' set to 'none' * xrthkproc - Add a correction to the time tag algorithm of the Windowed Timing Mode (XRT-PSU-037 document): the time to transfer a single row is subtracted, as requested at the XRT Team Meeting - June 28-29, 2005 Leicester. 
- Set all columns of the last dummy row to NULLs except TIME * xrthotpix - Do not set to 0 all bits of the STATUS column but only the ones used by this task when 'overstatus=yes' * xrtimage - Update to handle version 8.9 of the on-board software (on-board bias subtraction applied) and add the capability to apply a correction to the bias subtracted on-board. Added 'hdfile' and 'biasdiff' input parameters. - Implemented routine to take into account pixels in the burned spot region when the frame temperature is higher than the input reference temperature. New input parameter 'maxtemp'. * xrtpdcorr - Update to handle new Housekeeping Header Packet file format (on-board software v. 8.9) - Add possibility to correct the bias subtracted on-board for Low Rate Mode using the last twenty pixels of each frame telemetered. New input parameter 'biasdiff'. * xrttimetag - Merge GTIs of consecutive frames - Add a correction to the time tag algorithm of the Windowed Timing Mode (XRT-PSU-037 document): the time to transfer a single row is subtracted, as requested at the XRT Team Meeting - June 28-29, 2005 Leicester. New input parameter 'trfile'. * xrtpipeline - Several modifications to support the current build. General ------- barycorr - now uses 'CALDB' as the default parameter for the Swift clock correction file. The legacy ascii file ("swco.dat") is also still supported, however. HEASARC ------- extractor - v4.54 - Fix so that wtmapfix=yes writes -1 to pixels outside the region even when the image and wmap coordinates are the same. xselect - sensitivity to case of mission, instrument, etc. removed. ****************************************************************************** RELEASE NOTES FOR HEASOFT 6.0 April 12, 2005 ****************************************************************************** SUMMARY HEASOFT 6.0 is a software suite consisting of SWIFT 2.0, HEATOOLS 1.1, FTOOLS 6.0, FV 4.2, XIMAGE 4.3, XRONOS 5.22, XSPEC (version 12.2.0 and 11.3.2), and XSTAR 2.1kn3.
This document contains notes about significant changes made since the last major release of each. All this software shares common build, installation and initialization procedures. Please see the file HEASOFT-INSTALL.TXT for details. Please see README.CYGWIN if you are planning to install in the Cygwin environment. New in HEASOFT 6.0 is the HEADAS build environment which replaces earlier versions of configuration files and Makefiles as used in HEASOFT 5.3.1. The build procedure remains basically the same however, i.e. modeled after a typical GNU software distribution. For each of the HEASOFT subpackages below are listed tools and/or features which are new in version 6.0. More information on any of these packages is available online. FV, XIMAGE, and XSPEC all have online help available from within the program. In addition, XIMAGE, XSPEC, and XSTAR have TeX formatted manuals. ****************************************************************************** XPI / PIL ****************************************************************************** - In the HEADAS framework, all new tools use PIL (the ISDC Parameter Interface Library) instead of XPI (which is still used by the older FTOOLS). New versions of all the parameter utilities (except for pquery), that is: pget plist pquery2 pset punlearn have been written using PIL, and are now the default versions you will find in your PATH after HEASOFT initialization. These new versions are intended to reproduce the behavior of the xpi versions, but by virtue of using PIL there will be some differences. If you find that you have a need for the older XPI utilities, you can still build them in the following way: after building HEASOFT, cd headas/ftools/xanlib/xpiutils and then "hmake xpitools" followed by "hmake xpitools-install". The xpitools will be named with an "xpi" prefix, e.g. "xpipget".
****************************************************************************** ASCA ****************************************************************************** - ascaarf: Fixed bug when accumulating the gis detector efficiency. Also, decreased run time on Linux. - ascascreen: Fixed problem with output format with some versions of Perl. - cleansis: Modified to work on event files from instruments other than the ASCA SIS. The behaviour with ASCA SIS event files is unchanged. For other missions, cleansis gets the number of chips using the TLMIN/MAX of the column specified by the chipcol parameter and the image size from the TLMIN/MAX of the column specified by the rawxcol parameter. Improved error handling if TLMIN/MAX keywords are missing for RAWX/Y or chip columns. - mkgisbgd: Fixed problem which caused the background image in the primary to be corrupted when the GIS image bin size was 64x64 (which is different from the default 256x256). The background spectral data in the first extension and other images in the second and third extensions were correct. ****************************************************************************** CALTOOLS ****************************************************************************** - rsp2rmf: now available on Linux (now builds with g77/f90) - genrsp: Corrected bug when reading resolution information from a file for a dispersive instrument. Improved the diagnostic error if the user has given an invalid value for the res_reln parameter. - caldbinfo: Modified to handle Swift/XRT namespace conflict with older XRT instruments. - cmprmf: Modified to update the value of the LO_THRES keyword in the output rmf file. - rbnrmf: Fixed bug which generated a segmentation fault in the case where ebdfile=rmffile. - udcif: Updated to allow users the option of including a file to be indexed in the CALDB even if it seems to be a "duplicate" of another entry in the caldb index file.
This is useful since the determination of a duplicate did not previously check for differences in the CBD block, which can be used to distinguish files. ****************************************************************************** CFITSIO ****************************************************************************** CFITSIO 3.001: Major changes have been made to the CFITSIO library in this release to fully support large FITS files which may have size parameters that are larger than 2**31 = 2147483648, i.e., values that are larger than can be represented by a 32-bit signed integer. CFITSIO now supports FITS files that have: - integer FITS keywords with absolute values > 2.1E09 - FITS files with total sizes > 2.1 GB - FITS tables with more than 2.1E09 rows - FITS images with an axis length larger than 2**31 pixels To support this change, the data type of the integer parameters in many of the CFITSIO subroutines has been changed from a 32-bit 'long' to a 64-bit 'LONGLONG'. Fortunately this change is transparent to existing software that only calls the public CFITSIO routines (as defined in the fitsio.h include file) because the C compiler and the cfortran.h macros called by Fortran programs will automatically perform the data type conversion between 32-bit and 64-bit values as necessary. Existing software should run exactly as before without any modification, but to fully take advantage of this new support for large FITS files some applications might need to be modified to use 64-bit integer local variables to store parameters such as the image axes sizes, or table row numbers. There are a number of CFITSIO subroutines which pass the address to an integer parameter which could not be modified to support 64-bit integers without breaking existing software, so in these cases a new subroutine has been added to the CFITSIO library to specifically support 64-bit integer arguments. 
This backward software compatibility of CFITSIO does NOT apply to any application programs that include the fitsio2.h file and then make calls to some of the privately defined routines inside the CFITSIO library. Some of these internal routines are not compatible with previous versions of the library (especially those routines that now pass a 64-bit integer parameter by reference), so programmers that make use of fitsio2.h should carefully check which routines are called and make appropriate changes to the calling routine if necessary. It is possible (and even likely) that some of these internal CFITSIO routines will continue to change in future releases, so programmers are strongly discouraged from using fitsio2.h or calling any of the undocumented internal routines in CFITSIO. Other significant changes to CFITSIO in this release are: - New functions have been added to the lexical parser (as used by tasks such as fselect and fcalc), including MEDIAN, AVERAGE, STDDEV, ACCUM (accumulative sum), and ANGSEP (compute angular separation). Do 'fhelp calc_express' for more details. - Improved the routines that write 'DATE' keywords to rigorously verify that the input day, month, and year values are valid. ****************************************************************************** FV / POW ****************************************************************************** FV Version 4.2: New features/Bug fixes since V4.1.4 include: Add X axis range selection utility in POW. Add capability to display 4D table and image (movie). Add capability to create/display region on image. Add XPA entry points for X axis range selection and region display. Fix one dimension plot/image label problem. Update logic for displaying the CAR projection. Rework background backup directory cleanup logic. Fix polygon creation problem, allow the right mouse button to drag an existing vertex, and the left mouse button to create and move the region. Add color changes capability in Edit Region Panel.
Change vizier/skyview/catalog file naming scheme. ****************************************************************************** FIMAGE ****************************************************************************** - chimgtyp: modified to handle NaNs in input image. - fimgmerge: modified to handle null or NaN values in input image. ****************************************************************************** FUTILS ****************************************************************************** - fpartab: fixed bug in writing values to logical ('L') columns. - fdiff: fixed problem with comparison of string and logical table columns. Fixed erroneous reporting of differences when two files have different numbers of extensions. Fixed bug which caused the 'DATE' keyword to not be excluded from the file comparisons. - fverify: fixed to eliminate bogus warnings when verifying floating point FITS images that have been tile-compressed. Modified to accept '-' as the input file name, meaning read the file from stdin. Added compile option to force all error messages to go to stdout instead of stderr. Modified to support testing for bytes at the end of LARGE files (>2.1 GB). Modified to print out an error if the file doesn't exist or is not a FITS file, in the case where prstat=no. ****************************************************************************** HEASARC ****************************************************************************** - Extractor: Filtering Modified the code to interpret filters attached to the event filename to allow multiple ranges (e.g. filename.evt[GRADE=0:0 2:2]). Added support for region files with sizes given in arcminutes ('), degrees (d), or radians (r). Fixed bug in handling of DSVAL keywords that must be removed when filtering on the column set by the ccol parameter. Cleverer handling of region extensions. Now looks in a region extension to see what coordinates are used.
If the current filtering is with a different set of coordinates then a new region extension will be created (and the old ones copied). This fixes a bug which showed up with ASCA GIS data - the standard event files are filtered in detector space and then the user will often follow up by filtering in image space. Trapped NULL values of the TIME column and dealt with them correctly (i.e., rejected the event). Output Files Added a boolean parameter lctzero. The default is yes which provides the previous behaviour. Setting lctzero=no will cause the times in the lightcurve to be written in spacecraft time units and the keyword TIMEZERO to be set to zero. Added WCSNAMEP keywords for image and WMAP. Removed setting of TIMEREF to 'local' in W_FBLC. This keyword is set to the correct (propagated) value in WSTDKY. Modified the way the EXPOSURE keyword is calculated when the ccol parameter is set if multiple GTI extensions contribute to the output. The old way was to calculate the exposure for each GTI extension and average. This does not work for the UVOT event files where the exposures must be summed. The new version combines individual ranges from all the selected GTI extensions and uses the merged list to calculate an exposure. This does the correct thing for the UVOT case. For missions with multiple CCDs if products are required for spatial regions covering more than one CCD the exposure may be slightly different. However, if the old and new ways give very different exposures then it is not correct to combine data across the CCDs anyway. Propagated the value of the RADECSYS keyword from the event file to the output files. Note that the spectrum file was always being written with RADECSYS='FK4' which was usually wrong but didn't matter. - fovdsp (new in this release): display ISDC format reference catalog and INTEGRAL or Swift fields of view on either Galactic or Equatorial coordinates, aitoff or tangential projections. Also, FITS images may be displayed.
- gisxspec (new in this release): Do spectral analysis for GIS data. - makeregion (new in this release): Make a region file for an image with POWplot. - make2region (new in this release): Create two region files for an image through POWplot. - pspcxspec (new in this release): Perform spectral analysis for PSPC data. - selectxrange (new in this release): Make an X range file for an event FITS file with POWplot. - spibkg_init: Relative source weighting parameters are now entered as floating point rather than as strings. Some refinement to the background initialization algorithm was also made. In particular, some time-dependent background structures appearing on given detectors are detected and more accurately modeled. ****************************************************************************** HEATOOLS ****************************************************************************** - The new package 'HEAtools' has been distributed in recent Swift software releases and as a beta version, but this marks its first inclusion in a major release of HEASOFT. The HEAtools package represents a "next generation" of FTOOLS; its tools currently reproduce the functionality of existing FTOOLS, so the HEAtools package is not necessary if you already have FTOOLS. However, having been written entirely in ANSI C for maximum portability, the streamlined HEAtools should be more stable and provide faster results than their predecessors. For more information, see: ****************************************************************************** INTEGRAL ****************************************************************************** New package containing: - spibkgsubtrct: produces a background subtracted spectrum resulting from an initial XSPEC analysis of INTEGRAL/SPI data. ** NOTE: in order to compile spibkgsubtrct you must download the Xspec 12 source code in addition to the INTEGRAL package.
- varmosaic: carry out mosaicking of flux images, weighting by the accompanying variance images. Primarily intended for INTEGRAL ISGRI and JEMX instruments, but may be used for other coded mask instruments. ****************************************************************************** ROSAT ****************************************************************************** - pcexpmap: Fixed an array out of bounds condition which caused a segmentation fault under rare conditions. ****************************************************************************** SWIFT ****************************************************************************** Swift software v 2.0: ------------------ BAT specific tools ------------------ batbinevt - Correct name of POISSERR keyword in output spectra INTERFACE CHANGE: Allow the user to delete bins with zero counts. Before, the task always deleted these bins, in order to prevent battblocks from crashing. ('delzeroes' parameter) Added warning messages for unlikely user configurations. batcelldetect - INTERFACE CHANGE: New parameter 'hduclasses' which allows filtering of input images by HDUCLASn keywords. Now able to read an image without an auxiliary tangent plane coordinate system. Input catalog can have galactic coordinates (if the image is also in galactic coordinates). Output images now have proper HDUCLASn keywords. batclean - Can now enter custom background models via the 'bkgmodel' parameter. INTERFACE CHANGE: Can now rebalance the focal plane image before or after cleaning, with various rebalancing options ('balance' and 'balancefirst' parameters). INTERFACE CHANGE: Can now ignore certain sources in the catalog via the 'ignore' parameter. Fixes and simplifications to the way the background model is applied. New CLEANLEV output keyword keeps track of the generation level of cleaning.
batdrmgen - INTERFACE CHANGE: A new hidden parameter called "row" has been added that lets the user specify which row of a pha file a response matrix should be made for, in cases where there are multiple spectra in a pha file. The task will search for columns first, then keywords, with the following names: BAT_XOBJ, BAT_YOBJ, BAT_ZOBJ, PCODEFR, NGOODPIX, and MSKWTSQF. Before, it only searched for keywords. INTERFACE CHANGE: A correction function has been added to improve the response to the Crab spectrum. This is mediated by the "fudge" parameter (which until now has been ignored by batdrmgen). (Other changes are pretty transparent to the user, such as using finer binning to improve the response fidelity and changing the model of the passive material transmission to include the Ag and Au edges.) batcovert - INTERFACE CHANGE: Added cubic-residual ADU to energy conversion, with residuals computed by ADU or by corresponding DAC value. 'calmode' parameter options now include INDEF, LINEAR, QUADRATIC, CUBIC, DIRECTCUBIC, or FIXEDDAC. Except when using the LINEAR correction, residfile must be compatible with the calmode selection. FIXEDDAC is the recommended value for normal energy computation. Most users can enter INDEF, and use CALDB for the residfile. LINEAR is provided in order to reproduce the flight software energy calculation. The others are provided primarily for testing. A scaling bug that had a minor effect on results (because gains and offsets have changed very little) was fixed. A bug that prevented use of the rarely-used zeroit option was fixed. batrebin - Added cubic-residual ADU to energy correction, with residuals computed by ADU or by corresponding DAC value. The method is selected by selecting a residfile. Using the current CALDB file will apply the cubic-residual correction at fixed DAC values. Fixed an error that shifted all energy bins by approximately 0.1 keV (0.5 ADU). 
batfftimage - INTERFACE CHANGE: Now can compute the theoretical variance and significance maps via the 'bkgvarmap' and 'signifmap' parameters. KNOWN BUG: output significance images contain bogus values in the "dead" areas. WORKAROUND for source detection: use 'pcodefile' option of batcelldetect. Output images now have proper HDUCLASn keywords. batmasktaglc - INTERFACE CHANGE: A new hidden parameter has been added to "batmasktaglc." The parameter is "scale", a floating point value which is used to scale the input raw mask-tagged counts. This allows a correction to the input light curve due to any errors in how they are generated. The user must determine the correction factor. batmaskwtevt - Add additional error checking. batupdatephakw - NEW with this release ------------------- UVOT specific tools ------------------- uvotdetect - TDISPn keywords modified to report 2 more digits RA,DEC. Binning dependent SExtractor configuration files. Option to display detected sources. uvotimsum - Write RADECSYS and EQUINOX keywords to output image. Fixed bug: was not propagating sufficient keywords to output. uvotmag - TDISPn keywords modified to report 2 more digits RA,DEC. uvotpict - Modified graphical output. uvotproduct - NEW with this release. uvotscreen - Pass copyall=yes to extractor to retain WINDOW extension. Write parameter history to output. uvotsource - NEW with this release. uvotstarid - TDISPn keywords modified to report 2 more digits RA,DEC. Write NULL MAG_DELTA when catalog magnitude is not available. Separate parameters to limit position and rotation corrections. Index objects loaded from catalog. uvottfc - Corrected postage stamp size indicator in output source table ------------------- XRT specific ftools ------------------- xrtcalcpi - Calculate PI dependent on the CCD temperature using a new CALDB 'gain' file format and the XRT science packet header file. New input parameter 'hdfile'.
xrtevtrec - Update to read the event and split thresholds from the science packet header which records the on-board settings as a function of temperature. This change is in support of the upcoming FSW. If the event and split threshold parameters are negative, the event and split values are read from the header file; otherwise xrtevtrec uses any input positive value as before. New input parameter 'hdfile'. xrtfilter - Add the columns 'TEN_ARCMIN, SETTLED, ACS_SAA, SAFEHOLD' to the output makefilter file. These are derived from 'FLAGS' of the attitude file. This is to align the makefilter file generated by the xrtpipeline to the one stored in the archive. xrtflagpix - Add the capability to flag events as 'bad' in the Photon Counting mode when the central pixel is below the event threshold. The default threshold is set to 80. New input parameter 'phas1thr'; if set to 0 the check on the central pixel is not applied. xrthkproc - Add the capability to use detector coordinates for the source position. By default the task is run with the detector coordinates of the center of the detector (300,300). New input parameters 'srcdetx','srcdety'. xrthotpix - Add the new input parameter 'usegoodevt' to include/exclude from the search of hot and flickering pixels events set 'bad' by 'xrtflagpix' when the central pixel is below threshold. xrtpcgrade - Update to read the split threshold from the science packet header which records the on-board settings as a function of temperature. This change is in support of the upcoming FSW. If the split threshold parameter is negative, the split threshold value is read from the header file; otherwise xrtpcgrade uses any input positive value as before. New input parameter 'hdfile'. xrtpipeline - New input parameter 'obsmode' to process ALL obs_modes or only the one selected. This allows specifying different user input screening selections for the different observing modes. 'obsmode' values are (ALL, POINTING, SLEW, SETTLING).
The "ROTIME" column is now maintained in the Lev 1a file for the Timing modes to allow the calculation of the LLD and SPLIT temperature dependent parameters. The "ROTIME" is now deleted in the Lev 2 files. The parameter 'pilow' is set to 30 in the xrtproducts call. New input parameter 'usesrcdethkproc' to allow users to process the header science packet file (xrthkproc task) using detector or sky coordinates. New input parameter 'evtfromarc' to allow users to get input event files from the archive or from the user's output directory. This is to allow users to apply only data screening or products generation (stage 2 and/or stage 3 of the pipeline) to the Calibrated Level 1 event files in the archive. New input parameter 'createmkffile' to enable/disable the generation of the Filter file ('.mkf' file). If 'createmkffile' is set to 'no' the pipeline uses the Filter file taken from the archive. xrtproducts - Changed image plot energy band to 0.5-10 keV (PI=50-1000). Replaced xspec call with xspec11. Added check on 'ra' and 'dec' input parameters if the input 'regionfile' is not set to 'default'. xrtscreen - Read a new format for the CALDB file where now the good values of the hk parameter are listed together with the syntax to use in the screening expression. Add a new input parameter 'obsmodescreen'. If the parameter is set to yes (the default), add to the GTI expression the parameters that specify the observing mode (i.e.: "SETTLED==0&&TEN_ARCMIN==0" for data taken during slew, "SETTLED==0&&TEN_ARCMIN==1" for data taken in 10 arcmin from the source, "SETTLED==1" for data in pointing). These parameters are not added to the expression when generating GTI for the Image mode. xrttam - The extension ACS_DATA from the input attitude files is appended to the TAM corrected attitude output of xrttam. This extension is used by xrtfilter. xrttimetag - Add the capability to use detector coordinates for the source position when time-tagging the events.
The default processing uses the source sky coordinates. New input parameters 'srcdetx','srcdety', 'usesrcdet'. ------------------ Swift General tool ------------------ swiftxform - MJDREFI was misspelled, causing a warning about missing MJDREF keywords. Support attfile=CONST:* ------------------------------------------- BAT Known Issues on the analysis software ------------------------------------------- - batfftimage and batmaskwtimg: incorrect derived attitude Task: Imaging tasks (batfftimage and batmaskwtimg) Version: All versions What Builds: All builds Problem: Attitude may be incorrect for observations with many snapshots Status: Open Two imaging tasks assume that the spacecraft attitude is fixed during an observation: batfftimage (to make sky images) and batmaskwtimg (to make mask weighting maps for flux extraction). Both tasks take the attitude at the '''MIDPOINT''' of the observation start/stop times. If there are gaps in the observation, i.e. multiple snapshots, then it is possible, even likely, that the midpoint time will fall within a gap. When this happens, the attitude may be erroneously interpolated. There are two workaround solutions: * analyze one snapshot at a time, or * use the 'aspect' tool to generate a revised attitude file. As of Swift 1.2, 'aspect' can create a new attitude file based on the median pointing direction during the observation good times only. You will also need to supply a good time interval extension to 'aspect', which should be available in the detector or sky images. In the future the BAT team will investigate how feasible it is to incorporate 'aspect'-like functionality into the BAT imaging tools. - Analysis: Earth and Moon Occultation Task: Flux extraction tasks (batmaskwtevt, batmaskwtimg, batfftimage) Version: All versions What Builds: All builds Problem: Earth and moon may block parts of the BAT field of view Status: Open The BAT field of view is large, approximately 120 x 60 degrees fully coded.
The current spacecraft constraint excludes the sun with a 45 deg constraint cone, and the earth limb and moon with 30 degree constraints each. Even so, the Earth and Moon may enter the BAT field of view. This will most commonly occur at edges of the "long" BAT axis (i.e. large IMX in the image plane). The effect will be to occult the flux of sources in that part of the sky. Since the Moon and (primarily) the Earth move as a function of time, the blockage may have the effect of '''reducing,''' but not totally eliminating the source on-time. Example: in a 2000 second image, Sco X-1 might be blocked during the final 50 seconds. Users need to take special care regarding occultations, especially by the Earth. Tools are in process to make this job easier. These will involve image corrections for full images, and good time interval filters, for time selection of non-occulted data. - Mask tagged light curve systematic flux errors Task: batmasktaglc Version: All (bug is in BAT flight software) What Builds: All Problem: There are systematic flux errors in mask tagged light curves Status: Open Discussion: BAT mask tagged light curves are generated on board by the BAT flight software. The on-board process involves generating a mask weight map via ray tracing (similar to the ground task batmaskwtimg). On the ground, the raw light curves require further processing before they are scientifically meaningful. The "mask tagging" process requires that the source position be known in advance, in instrument coordinates. This transformation requires knowledge of the spacecraft attitude, the instrument-to-spacecraft alignment, and the source celestial coordinates. A bug has been found in the BAT flight software which makes the incorrect transformation. The ray-traced position used for generating the mask weight map is several arcminutes off of the true position. 
The point spread function is thus sampled significantly off-peak; this irretrievably reduces the signal to noise of the mask weighted light curve fluxes. Also, the flux itself is underreported compared to its true value. Ground software fixes are being investigated that would correct the flux to a truer value (although with larger error bars). A long term solution to fix the flight software is also being investigated, but may not be feasible. ****************************************************************************** TIME ****************************************************************************** - maketime: Implemented new prefr/postfr scheme: new filter files created by makefilter (or newmakefilter) need prefr=0.0 and postfr=1.0, and should have the PREFR & POSTFR keywords set. If the PREFR/POSTFR keywords are not found, the standard value of 0.5 is used for each (for e.g. older missions). The user may override either of these choices by entering prefr/postfr values on the command line. ****************************************************************************** XIMAGE ****************************************************************************** XIMAGE VERSION 4.3 For full documentation on XIMAGE, refer to: * Better handling of images with no sky coordinates * Improved GTI merging capability * New command uplimit, which calculates an upper limit * New options + ccorr -> symlwidth - Specify symbol's line width for guide source markers + read_image -> gtiext - Specify extension to read GTI from event file + wcs -> frameid - Switch between image coordinate frames For complete change history of XIMAGE see: ****************************************************************************** XRONOS ****************************************************************************** - earth2sun: Fixed uninitialized variable that resulted in incorrect results on Darwin (Mac OS X). - rbf2fits: compiles with g77 & f90 now. 
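The maketime prefr/postfr scheme described in the TIME section above can be sketched in Python. This is an illustrative approximation, not the maketime source; the function name and signature are invented. Each filter-file row samples the housekeeping parameters at a discrete time, and prefr/postfr give the fraction of one sampling interval by which the GTI is extended before the first good sample and after the last one:

```python
def gti_bounds(t_first, t_last, dt, prefr=0.5, postfr=0.5):
    """Approximate GTI (start, stop) around a run of 'good' filter-file
    rows. t_first/t_last are the times of the first and last good
    samples, dt is the sampling interval, and prefr/postfr are the
    fractions of dt to include before/after the run (0.5 is the
    standard value used when PREFR/POSTFR keywords are absent)."""
    return (t_first - prefr * dt, t_last + postfr * dt)

# Good samples at t = 10..20 s, sampled once per second:
print(gti_bounds(10.0, 20.0, 1.0))            # old default: (9.5, 20.5)
print(gti_bounds(10.0, 20.0, 1.0, 0.0, 1.0))  # new makefilter files: (10.0, 21.0)
```

Both conventions yield the same total exposure in this example; what changes is whether the interval is centered on the sample times or begins exactly at the first good sample, which is why GTIs built from new makefilter files need prefr=0.0 and postfr=1.0.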
****************************************************************************** XSELECT ****************************************************************************** Added offset switch option to extract curve. Offset=no now creates a lightcurve whose times are given in spacecraft units. This differs from the default which writes out times relative to the start bin. Added filter column option which allows filtering on event attributes which are included in columns in the event file. At the moment allows filters of the form "NAME=val:val val:val..." because these are handled by the extractor key filtering. Note that this filter can be used on Astro-E2 XRS data to get events from subsets of pixels. Replaced getting MJDREF from the MDB by reading MJDREF[F/I] and TIMESYS from the event file. These are used when time filtering based on UTC or MJD ranges. Now includes conversion from UTC to TT if TIMESYS='TT'. Allowed "select expref" as an alias for "select chip". Updates to MDB for Swift, Integral, and Astro-E2 XRS. ****************************************************************************** XSPEC ****************************************************************************** *** Note that this release includes both xspec v12 (invoked with "xspec") and xspec v11.3.2 (invoked with "xspec11"). xspec v12 is not currently supported on the OSF/alpha or Cygwin platforms, so for those platforms only, invoking "xspec" will call xspec11. ************** XSPEC v12.2.0: ************** Xspec v12 is a major revision from v11. The core of the program is now primarily written in double precision ANSI C++ instead of single precision Fortran77 and its internal design, layout, and data structures, have been reorganized into an object-oriented framework using design patterns and generic programming techniques made available with C++. Most of the xspec models library however has been retained in its original Fortran77 code. 
Some of the enhancements, in particular the use of multiple models described below, require corresponding changes in command-line syntax. However, with few exceptions the program remains fully backward-compatible with xspec v11, and previously existing user scripts are expected to run in v12 with little or no modification. That said, compatibility issues are possible with this initial release, and we welcome any user reports to help us correct these and other bugs. Key modifications: The implementation of user models has been rewritten to allow users to write models not only calculated in single precision Fortran77, but double precision Fortran77, C, and C++. Further, XSPEC can now be used as a development environment for user models by allowing recompilation and plotting from the XSPEC command prompt. A spectrum can be fit with the sum of separate models, each having its own response matrix. This feature is useful for analyzing data from coded aperture masks. A new internal dynamic expression implementation allows more complex (multiply-nested) models, and also allows parameter links to be polynomial functions of one or more parameters. A ctrl-c breaking mechanism has been implemented to allow early exit from the more time-intensive tasks such as fitting and error calculations. An "undo" command has also been added to return the program to the state before its most recent command was issued. The CERN Minuit/migrad algorithm has been better integrated into the code and its documentation is now directly accessible to the user during XSPEC sessions. Type II (multi-spectrum) OGIP files are now fully supported. Multiple ranges can be selected in the data command, and support is present for Type II background and arf files. Observation simulations (the fakeit command) now operate on Type II inputs. The online documentation scheme is now implemented using pdf files, replacing the older VMS-style help system.
The help scheme can be configured to use external applications such as Adobe Acrobat or the xpdf readers. Users can document their own local models and tcl-scripted procedures in pdf files and add them to the help system. For a more complete list of modifications, enhancements, and descriptions, please refer to the xspec12 manual. ************** XSPEC v11.3.2: ************** This is mainly a bug-fix update of 11.3.1 with the addition of three new models. We are continuing to distribute v11 to aid comparisons with the new v12; however, all future development work will occur on v12. We will, however, attempt to supply fixes for any serious bugs that show up in v11. Changes to commands: "tclout simpars" returns a list of simulated parameter values for the model in use (based on the covariance matrix at the end of the last fit). Changes to models: All the models that use the mekal code (e.g. mkcflow) can be made to use the APEC code by setting the switch parameter to 2. The APEC versions of NEI models are modified by the APECTHERMAL and APECVELOCITY variables in the same way as the standard APEC models. Models added: compbb - Comptonization model of Poutanen and Svenson. ezdiskbb - Replacement for diskbb with a zero-torque inner boundary condition (Zimmerman, Narayan & McClintock). kerrbb - Multi-temperature blackbody for thin accretion disk around a Kerr black hole (Li et al. 2005). zredden - Redshifted version of redden. ****************************************************************************** XSTAR ****************************************************************************** Xstar Version 2.1kn3 (April 2005): - Two conservation nor thermal equilibrium is calculated.
****************************************************************************** MISC ****************************************************************************** - A final note about libraries - new in this release (relative to HEASOFT 5.3.1) are: atFunctions 2.3 readline 4.3 SLALIB version 2.4-13 (contained in xanlib) Web page maintained by Bryan K. Irby
https://heasarc.gsfc.nasa.gov/docs/software/lheasoft/RelNotes_602.html
5. Zope Products

Attention: This document is currently being reviewed and edited for the upcoming release of Zope 4.

5.1. Introduction

In this chapter we are looking at building Python packages that are Zope Products. Products most often provide new addable objects.

5.2. Development Process

This chapter begins with a discussion of how you will develop products. We'll focus on common engineering tasks that you'll encounter as you develop products.

5.2.1. Consider Alternatives

Before you jump into the development of a product you should consider the alternatives. Would your problem be better solved with External Methods or Python Scripts? Products excel at extending Zope with new addable classes of objects. If this does not figure centrally in your solution, you should look elsewhere. Products, like External Methods, allow you to write unrestricted Python code on the filesystem.

5.2.2. Starting with Interfaces

The first step in creating a product is to create one or more interfaces which describe the product. See Chapter 2 for more information on interfaces and how to create them. Creating interfaces before you build an implementation is a good idea since it helps you see your design and assess how well it fulfills your requirements. Consider this interface for a multiple choice poll component (see Poll.py):

    from zope.interface import Interface

    class IPoll(Interface):
        """A multiple choice poll"""

        def castVote(index):
            """Votes for a choice"""

        def getTotalVotes():
            """Returns total number of votes cast"""

        def getVotesFor(index):
            """Returns number of votes cast for a given response"""

        def getResponses():
            """Returns the sequence of responses"""

        def getQuestion():
            """Returns the question"""

How you name your interfaces is entirely up to you. Here we've decided to use the prefix "I" in the name of the interface.

5.2.3. Implementing Interfaces

After you have defined an interface for your product, the next step is to create a prototype in Python that implements your interface.
Here is a prototype of a PollImplementation class that implements the interface you just examined (see PollImplementation.py):

    from zope.interface import implements
    from Poll import IPoll

    class PollImplementation:
        """A multiple choice poll, implements the IPoll interface.

        The poll has a question and a sequence of responses. Votes
        are stored in a dictionary which maps response indexes to a
        number of votes.
        """

        implements(IPoll)

        def __init__(self, question, responses):
            self._question = question
            self._responses = responses
            self._votes = {}
            for i in range(len(responses)):
                self._votes[i] = 0

        def castVote(self, index):
            """Votes for a choice"""
            self._votes[index] = self._votes[index] + 1

        def getTotalVotes(self):
            """Returns total number of votes cast"""
            total = 0
            for v in self._votes.values():
                total = total + v
            return total

        def getVotesFor(self, index):
            """Returns number of votes cast for a given response"""
            return self._votes[index]

        def getResponses(self):
            """Returns the sequence of responses"""
            return tuple(self._responses)

        def getQuestion(self):
            """Returns the question"""
            return self._question

You can use this class interactively and test it. Here's an example of interactive testing:

    >>> from PollImplementation import PollImplementation
    >>> p = PollImplementation("What's your favorite color?",
    ...                        ["Red", "Green", "Blue", "I forget"])
    >>> p.getQuestion()
    "What's your favorite color?"
    >>> p.getResponses()
    ('Red', 'Green', 'Blue', 'I forget')
    >>> p.getVotesFor(0)
    0
    >>> p.castVote(0)
    >>> p.getVotesFor(0)
    1
    >>> p.castVote(2)
    >>> p.getTotalVotes()
    2
    >>> p.castVote(4)
    Traceback (innermost last):
      File "<stdin>", line 1, in ?
      File "PollImplementation.py", line 23, in castVote
        self._votes[index] = self._votes[index] + 1
    KeyError: 4

Interactive testing is one of Python's great features. It lets you experiment with your code in a simple but powerful way. At this point you can do a fair amount of work, testing and refining your interfaces and classes which implement them. See Chapter 9 for more information on testing.
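The interactive checks above can also be captured as a repeatable unit test. The sketch below re-declares a trimmed copy of the poll class (without the Zope interface declaration) so that it is self-contained and runnable outside Zope:

```python
import unittest

class PollImplementation:
    """Trimmed copy of the class above, enough to exercise the interface."""

    def __init__(self, question, responses):
        self._question = question
        self._responses = responses
        self._votes = {i: 0 for i in range(len(responses))}

    def castVote(self, index):
        self._votes[index] = self._votes[index] + 1

    def getTotalVotes(self):
        return sum(self._votes.values())

    def getVotesFor(self, index):
        return self._votes[index]

class PollTests(unittest.TestCase):

    def setUp(self):
        self.poll = PollImplementation("What's your favorite color?",
                                       ["Red", "Green", "Blue"])

    def test_vote_counting(self):
        self.poll.castVote(0)
        self.poll.castVote(2)
        self.assertEqual(self.poll.getTotalVotes(), 2)
        self.assertEqual(self.poll.getVotesFor(0), 1)

    def test_out_of_range_vote_raises(self):
        # Mirrors the KeyError seen in the interactive session above.
        self.assertRaises(KeyError, self.poll.castVote, 4)

if __name__ == "__main__":
    unittest.main(argv=["polltests"], exit=False)
```

Unlike an interactive session, a test like this can be re-run after every change to the implementation, which pays off once the class is embedded in the product machinery described next.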
So far you have learned how to create Python classes that are documented with interfaces, and verified with testing. Next you'll examine the Zope product architecture. Then you'll learn how to fit your well crafted Python classes into the product framework.

5.2.4. Building Product Classes

To turn a component into a product you must fulfill many contracts. For the most part these contracts are not yet defined in terms of interfaces. Instead you must subclass from base classes that implement the contracts. This makes building products confusing, and this is an area that we are actively working on improving.

5.2.5. Base Classes

Consider an example product class definition:

    from Acquisition import Implicit
    from Globals import Persistent
    from AccessControl.Role import RoleManager
    from OFS.SimpleItem import Item

    class PollProduct(Implicit, Persistent, RoleManager, Item):
        """Poll product class"""
        ...

The order of the base classes depends on which classes you want to take precedence over others. Most Zope classes do not define similar names, so you usually don't need to worry about what order these classes are used in your product. Let's take a look at each of these base classes.

5.2.5.1. Acquisition.Implicit

This is the normal acquisition base class. See the API Reference for the full details on this class. Many Zope services such as object publishing and security use acquisition, so inheriting from this class is required for products. Actually, you can choose to inherit from Acquisition.Explicit if you prefer; however, it will prevent folks from dynamically binding Python Scripts and DTML Methods to instances of your class. In general you should subclass from Acquisition.Implicit unless you have a good reason not to.

XXX: is this true? I thought that any ExtensionClass.Base can be acquired. The Implicit and Explicit just control how the class can acquire, not how it is acquired.

5.2.5.2. Globals.Persistent

This base class makes instances of your product persistent.
For more information on persistence and this class see Chapter 4. In order to make your poll class persistent you'll need to make one change. Since _votes is a dictionary, it is a mutable non-persistent sub-object. You'll need to let the persistence machinery know when you change it:

    def castVote(self, index):
        """Votes for a choice"""
        self._votes[index] = self._votes[index] + 1
        self._p_changed = 1

The last line of this method sets the _p_changed attribute to 1. This tells the persistence machinery that this object has changed and should be marked as dirty, meaning that its new state should be written to the database at the conclusion of the current transaction. A more detailed explanation is given in the Persistence chapter of this guide.

5.2.5.3. OFS.SimpleItem.Item

This base class provides your product with the basics needed to work with the Zope management interface. By inheriting from Item your product class gains a whole host of features: the ability to be cut and pasted, compatibility with management views, WebDAV support, undo support, ownership support, and traversal controls. It also gives you some standard methods for management views and error display, including manage_main(). You also get the getId(), title_or_id(), and title_and_id() methods and the this() DTML utility method. Finally, this class gives your product basic dtml-tree tag support. Item is really an everything-but-the-kitchen-sink kind of base class.

Item requires that your class and instances have some management interface related attributes:

    meta_type – This attribute should be a short string which is the name of your product class as it appears in the product add list. For example, the poll product class could have a meta_type of 'Poll'.

    id or __name__ – All Item instances must have an id string attribute which uniquely identifies the instance within its container. As an alternative you may use __name__ instead of id.

    title – All Item instances must have a title string attribute.
A title may be an empty string if your instance does not have a title. In order to make your poll class work correctly as an Item you'll need to make a few changes. You must add a meta_type class attribute, and you may wish to add an id parameter to the constructor:

    class PollProduct(..., Item):

        meta_type = 'Poll'

        ...

        def __init__(self, id, question, responses):
            self.id = id
            self._question = question
            self._responses = responses
            self._votes = {}
            for i in range(len(responses)):
                self._votes[i] = 0

Finally, you should probably place Item last in your list of base classes. The reason for this is that Item provides defaults that other classes such as ObjectManager and PropertyManager override. By placing other base classes before Item you allow them to override methods in Item.

5.2.5.4. AccessControl.Role.RoleManager

This class provides your product with the ability to have its security policies controlled through the web. See Chapter 6 for more information on security policies and this class.

5.2.5.5. OFS.ObjectManager

This base class gives your product the ability to contain other Item instances. In other words, it makes your product class like a Zope folder. This base class is optional. See the API Reference for more details.

This base class gives you facilities for adding Zope objects, importing and exporting Zope objects, and WebDAV. It also gives you the objectIds, objectValues, and objectItems methods. ObjectManager makes few requirements on classes that subclass it. You can choose to override some of its methods, but there is little that you must do.

If you wish to control which types of objects can be contained by instances of your product you can set the meta_types class attribute. This attribute should be a tuple of meta_types. This keeps other types of objects from being created in or pasted into instances of your product. The meta_types attribute is mostly useful when you are creating specialized container products.

5.2.5.6.
OFS.PropertyManager

This base class provides your product with the ability to have user-managed instance attributes. See the API Reference for more details. This base class is optional.

Your class may specify that it has one or more predefined properties by specifying a _properties class attribute. For example:

    _properties = (
        {'id': 'title', 'type': 'string', 'mode': 'w'},
        {'id': 'color', 'type': 'string', 'mode': 'w'},
        )

The _properties structure is a sequence of dictionaries, where each dictionary represents a predefined property. Note that if a predefined property is defined in the _properties structure, you must provide an attribute with that name in your class or instance that contains the default value of the predefined property.

Each entry in the _properties structure must have at least an id and a type key. The id key contains the name of the property, and the type key contains a string representing the object's type. The type string must be one of the values: float, int, long, string, lines, text, date, tokens, selection, or multiple selection. For more information on Zope properties see the Zope Book.

For selection and multiple selection properties, you must include an additional item in the property dictionary, select_variable, which provides the name of a property or method which returns a list of strings from which the selection(s) can be chosen. For example:

    _properties = (
        {'id': 'favorite_color', 'type': 'selection',
         'select_variable': 'getColors'},
        )

Each entry in the _properties structure may optionally provide a mode key, which specifies the mutability of the property. The mode string, if present, must be w, d, or wd. A w in the mode string indicates that the value of the property may be changed by the user. A d indicates that the user can delete the property. An empty mode string indicates that the property and its value may be shown in property listings, but that it is read-only and may not be deleted.
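The rules for _properties entries lend themselves to a small sanity checker. The following is a hypothetical helper for illustration only, not part of OFS.PropertyManager; it encodes the required keys, the valid type strings, the valid mode strings, and the default mode of wd for entries without a mode key:

```python
# Hypothetical checker for the _properties rules described above.
# Not part of OFS.PropertyManager -- illustration only.
VALID_TYPES = ('float', 'int', 'long', 'string', 'lines', 'text', 'date',
               'tokens', 'selection', 'multiple selection')

def check_properties(properties):
    """Return a list of problems found in a _properties structure."""
    problems = []
    for prop in properties:
        pid = prop.get('id', '<missing id>')
        if 'id' not in prop or 'type' not in prop:
            problems.append('%s: entries need both id and type keys' % pid)
            continue
        if prop['type'] not in VALID_TYPES:
            problems.append('%s: unknown type %r' % (pid, prop['type']))
        # entries without a mode default to 'wd' (writable and deletable)
        if prop.get('mode', 'wd') not in ('', 'w', 'd', 'wd'):
            problems.append('%s: bad mode %r' % (pid, prop['mode']))
        if prop['type'] in ('selection', 'multiple selection') \
           and 'select_variable' not in prop:
            problems.append('%s: selection needs select_variable' % pid)
    return problems

print(check_properties((
    {'id': 'title', 'type': 'string', 'mode': 'w'},
    {'id': 'favorite_color', 'type': 'selection',
     'select_variable': 'getColors'},
)))  # []
```

A selection entry without a select_variable, or an entry with an unknown type string, would come back as a reported problem.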
Entries in the _properties structure which do not have a mode item are assumed to have the mode wd (writable and deletable).

5.2.6. Security Declarations

In addition to inheriting from a number of standard base classes, you must declare security information in order to turn your component into a product. See Chapter 6 for more information on security and instructions for declaring security on your components. Here's an example of how to declare security on the poll class:

    from AccessControl import ClassSecurityInfo

    class PollProduct(...):
        ...

        security = ClassSecurityInfo()

        security.declareProtected('Use Poll', 'castVote')
        def castVote(self, index):
            ...

        security.declareProtected('View Poll results', 'getTotalVotes')
        def getTotalVotes(self):
            ...

        security.declareProtected('View Poll results', 'getVotesFor')
        def getVotesFor(self, index):
            ...

        security.declarePublic('getResponses')
        def getResponses(self):
            ...

        security.declarePublic('getQuestion')
        def getQuestion(self):
            ...

For security declarations to be set up, Zope requires that you initialize your product class. Here's how to initialize your poll class:

    from Globals import InitializeClass

    class PollProduct(...):
        ...

    InitializeClass(PollProduct)

5.2.7. Summary

Congratulations, you've created a product class. Here it is in all its glory (see examples/PollProduct.py):

    from Poll import Poll
    from AccessControl import ClassSecurityInfo
    from Globals import InitializeClass
    from Acquisition import Implicit
    from Globals import Persistent
    from AccessControl.Role import RoleManager
    from OFS.SimpleItem import Item

    class PollProduct(Implicit, Persistent, RoleManager, Item):
        """Poll product class, implements Poll interface.

        The poll has a question and a sequence of responses. Votes
        are stored in a dictionary which maps response indexes to a
        number of votes.
        """

        implements(IPoll)

        meta_type = 'Poll'

        security = ClassSecurityInfo()

        def __init__(self, id, question, responses):
            self.id = id
            self._question = question
            self._responses = responses
            self._votes = {}
            for i in range(len(responses)):
                self._votes[i] = 0

        security.declareProtected('Use Poll', 'castVote')
        def castVote(self, index):
            "Votes for a choice"
            self._votes[index] = self._votes[index] + 1
            self._p_changed = 1

        security.declareProtected('View Poll results', 'getTotalVotes')
        def getTotalVotes(self):
            "Returns total number of votes cast"
            total = 0
            for v in self._votes.values():
                total = total + v
            return total

        security.declareProtected('View Poll results', 'getVotesFor')
        def getVotesFor(self, index):
            "Returns number of votes cast for a given response"
            return self._votes[index]

        security.declarePublic('getResponses')
        def getResponses(self):
            "Returns the sequence of responses"
            return tuple(self._responses)

        security.declarePublic('getQuestion')
        def getQuestion(self):
            "Returns the question"
            return self._question

    InitializeClass(PollProduct)

Now it's time to test your product class in Zope. To do this you must register your product class with Zope.

5.3. Registering Products

Products are Python packages that live in lib/python/Products. Products are loaded into Zope when Zope starts up. This process is called product initialization. During product initialization, each product is given a chance to register its capabilities with Zope.

5.3.1. Product Initialization

When Zope starts up it imports each product and calls the product's initialize function, passing it a registrar object. The initialize function uses the registrar to tell Zope about its capabilities. Here is an example __init__.py file:

    from PollProduct import PollProduct, addForm, addFunction

    def initialize(registrar):
        registrar.registerClass(
            PollProduct,
            constructors = (addForm, addFunction),
            )

This function makes one call to the registrar object which registers a class as an addable object.
The registrar figures out the name to put in the product add list by looking at the meta_type of the class. Zope also deduces a permission based on the class's meta_type, in this case Add Polls (Zope automatically pluralizes "Poll" by adding an "s"). The constructors argument is a tuple of objects consisting of two functions: an add form, which is called when a user selects the object from the product add list, and the add method, which is the method called by the add form. Note that these functions are protected by the constructor permission.

Note that you cannot restrict which types of containers can contain instances of your classes. In other words, when you register a class, it will appear in the product add list in folders if the user has the constructor permission.

See the API Reference for more information on the ProductRegistrar interface.

5.3.2. Factories and Constructors

Factories allow you to create Zope objects that can be added to folders and other object managers. Factories are discussed in Chapter 12 of the Zope Book. The basic work a factory does is to put a name into the product add list and associate a permission and an action with that name. If you have the required permission then the name will appear in the product add list, and when you select the name from the product add list, the action method will be called.

Products use Zope factory capabilities to allow instances of product classes to be created with the product add list. In the above example of product initialization you saw how a factory is created by the product registrar. Now let's see how to create the add form and the add method.

The add form is a function that returns an HTML form that allows users to create an instance of your product class. Typically this form collects the id and title of the instance along with other relevant data.
Here’s a very simple add form function for the poll class: def addForm(): """Returns an HTML form.""" return """<html> <head><title>Add Poll</title></head> <body> <form action="addFunction"> id <input type="type" name="id"><br> question <input type="type" name="question"><br> responses (one per line) <textarea name="responses:lines"></textarea> </form> </body> </html>""" Notice how the action of the form is addFunction. Also notice how the lines of the response are marshalled into a sequence. See Chapter 2 for more information about argument marshalling and object publishing. It’s also important to include a HTML head tag in the add form. This is necessary so that Zope can set the base URL to make sure that the relative link to the addFunction works correctly. The add function will be passed a FactoryDispatcher as its first argument which proxies the location (usually a Folder) where your product was added. The add function may also be passed any form variables which are present in your add form according to normal object publishing rules. Here’s an add function for your poll class: def addFunction(dispatcher, id, question, responses): """Create a new poll and add it to myself """ p = PollProduct(id, question, responses) dispatcher.Destination()._setObject(id, p) The dispatcher has three methods: Destination– The ObjectManagerwhere your product was added. DestinationURL– The URL of the ObjectManagerwhere your product was added. manage_main– Redirects to a management view of the ObjectManagerwhere your product was added. Notice how it calls the _setObject() method of the destination ObjectManager class to add the poll to the folder. See the API Reference for more information on the ObjectManager interface. The add function should also check the validity of its input. For example the add function should complain if the question or response arguments are not of the correct type. 
Finally, you should recognize that the constructor functions are not methods on your product class. In fact, they are called before any instances of your product class are created. The constructor functions are published on the web, so they need to have doc strings, and they are protected by a permission defined during product initialization.

5.3.3. Testing

Now you're ready to register your product with Zope. You need to add the add form and add method to the poll module. Then you should create a Poll directory in your lib/python/Products directory and add the Poll.py, PollProduct.py, and __init__.py files. Then restart Zope.

Now log in to Zope as a manager and visit the web management interface. You should see a Poll product listed inside the Products folder in the Control_Panel. If Zope had trouble initializing your product you will see a traceback here. Fix your problems, if any, and restart Zope. If you are tired of all this restarting, take a look at the Refresh facility covered in Chapter 7.

Now go to the root folder. Select Poll from the product add list. Notice how you are taken to the add form. Provide an id, a question, and a list of responses and click Add. Notice how you get a blank screen. This is because your add method does not return anything. Notice also that your poll has a broken icon and has only the standard management views. Don't worry about these problems now; you'll find out how to fix them in the next section.

Now you should build some DTML Methods and Python Scripts to test your poll instance. Here's a Python Script to figure out voting percentages:

    ## Script (Python) "getPercentFor"
    ##parameters=index
    ##
    """Returns the percentage of the vote given a response index.
    Note, this script should be bound to a poll by acquisition context."""
    poll = context
    return float(poll.getVotesFor(index)) / poll.getTotalVotes()

Here's a DTML Method that displays poll results and allows you to vote:

    <dtml-var standard_html_header>

    <h2><dtml-var getQuestion></h2>

    <form> <!-- calls this dtml method -->
    <dtml-in getResponses>
      <p>
      <input type="radio" name="index" value="&dtml-sequence-index;">
      <dtml-var sequence-item>
      </p>
    </dtml-in>
    <input type="submit" value=" Vote ">
    </form>

    <!-- process form -->
    <dtml-if index>
      <dtml-call expr="castVote(index)">
    </dtml-if>

    <!-- display results -->
    <h2>Results</h2>

    <p><dtml-var getTotalVotes> votes cast</p>

    <dtml-in getResponses>
      <p>
      <dtml-var sequence-item> -
      <dtml-var expr="getPercentFor(_['sequence-index'])">%
      </p>
    </dtml-in>

    <dtml-var standard_html_footer>

To use this DTML Method, call it on your poll instance. Notice how this DTML makes calls to both your poll instance and the getPercentFor Python Script.

At this point there's quite a bit of testing and refinement that you can do. Your main annoyance will be having to restart Zope each time you make a change to your product class (but see Chapter 9 for information on how to avoid all this restarting). If you vastly change your class you may break existing poll instances, and will need to delete them and create new ones. See Chapter 9 for more information on debugging techniques, which will come in handy.

5.3.4. Building Management Interfaces

Now that you have a working product, let's see how to beef up its user interface and create online management facilities.

5.3.5. Defining Management Views

All Zope products can be managed through the web. Products have a collection of management tabs or views which allow managers to control different aspects of the product. A product's management views are defined in the manage_options class attribute.
Here’s an example: manage_options=( {'label' : 'Edit', 'action' : 'editMethod'}, {'label' : 'View', 'action' : 'viewMethod'}, ) The manage_options structure is a tuple that contains dictionaries. Each dictionary defines a management view. The view dictionary can have a number of items. ‘label’ – This is the name of the management view ‘action’ – This is the URL that is called when the view is chosen. Normally this is the name of a method that displays a management view. ‘target’ – An optional target frame to display the action. This item is rarely needed. ‘help’ – Optional help information associated with the view. You’ll find out more about this option later. Management views are displayed in the order they are defined. However, only those management views for which the current user has permissions are displayed. This means that different users may see different management views when managing your product. Normally you will define a couple custom views and reusing some existing views that are defined in your base classes. Here’s an example: class PollProduct(..., Item): ... manage_options=( {'label' : 'Edit', 'action' : 'editMethod'}, {'label' : 'Options', 'action' : 'optionsMethod'}, ) + RoleManager.manage_options + Item.manage_options This example would include the standard management view defined by RoleManager which is Security and those defined by Item which are Undo and Ownership. You should include these standard management views unless you have good reason not to. If your class has a default view method ( index_html) you should also include a View view whose action is an empty string. See Chapter 2 for more information on index_html. Note: you should not make the View view the first view on your class. The reason is that the first management view is displayed when you click on an object in the Zope management interface. 
If the View view is displayed first, users will be unable to navigate to the other management views, since the view tabs will not be visible.

5.3.6. Creating Management Views

The normal way to create management view methods is to use DTML. You can use the DTMLFile class to create a DTML Method from a file. For example:

    from Globals import DTMLFile

    class PollProduct(...):
        ...
        editForm = DTMLFile('dtml/edit', globals())
        ...

This creates a DTML Method on your class which is defined in the dtml/edit.dtml file. Notice that you do not have to include the .dtml file extension. Also, don't worry about the forward slash as a path separator; this convention will work fine on Windows. By convention, DTML files are placed in a dtml subdirectory of your product. The globals() argument to the DTMLFile constructor allows it to locate your product directory.

If you are running Zope in debug mode then changes to DTML files are reflected right away. In other words, you can change the DTML of your product's views without restarting Zope to see the changes.

DTML class methods are callable directly from the web, just like other methods. So now users can see your edit form by calling the editForm method on instances of your poll class. Typically DTML methods will make calls back to your instance to gather information to display. Alternatively you may decide to wrap your DTML methods with normal methods. This allows you to calculate information needed by your DTML before you call it. This arrangement also ensures that users always access your DTML through your wrapper. Here's an example:

    from Globals import DTMLFile

    class PollProduct(...):
        ...
        _editForm = DTMLFile('dtml/edit', globals())

        def editForm(self, ...):
            ...
            return self._editForm(REQUEST, ...)

When creating management views you should include the DTML variables manage_page_header and manage_tabs at the top, and manage_page_footer at the bottom.
These variables are acquired by your product and draw a standard management view header, tab widgets, and footer. The management header also includes CSS information which you can take advantage of. You can use any of the styles that Bootstrap 4 provides; see the Bootstrap documentation for details.

Here's an example management view for your poll class. It allows you to edit the poll question and responses (see editPollForm.dtml):

    <dtml-var manage_page_header>
    <dtml-var manage_tabs>

    <p class="form-help">
    This form allows you to change the poll's question and responses.
    <b>Changing a poll's question and responses will reset
    the poll's vote tally.</b>
    </p>

    <form action="editPoll">
    <table>
      <tr valign="top">
        <th class="form-label">Question</th>
        <td><input type="text" name="question" class="form-element"
                   value="&dtml-getQuestion;"></td>
      </tr>
      <tr valign="top">
        <th class="form-label">Responses</th>
        <td><textarea name="responses:lines" cols="50" rows="10">
    <dtml-in getResponses>
    <dtml-var sequence-item html_quote>
    </dtml-in>
    </textarea>
        </td>
      </tr>
      <tr>
        <td></td>
        <td><input type="submit" value="Change" class="form-element"></td>
      </tr>
    </table>
    </form>

    <dtml-var manage_page_footer>

This DTML method displays an edit form that allows you to change the question and responses of your poll. Notice how poll properties are HTML quoted, either by using html_quote in the dtml-var tag or by using the dtml-var entity syntax. Assuming this DTML is stored in a file editPollForm.dtml in your product's dtml directory, here's how to define this method on your class:

    class PollProduct(...):
        ...
        security.declareProtected('View management screens', 'editPollForm')
        editPollForm = DTMLFile('dtml/editPollForm', globals())

Notice how the edit form is protected by the View management screens permission. This ensures that only managers will be able to call this method. Notice also that the action of this form is editPoll. Since the poll as it stands doesn't include any edit methods, you must define one to accept the changes.
Here’s an editPoll method: class PollProduct(...): ... def __init__(self, id, question, responses): self.id = id self.editPoll(question, response) ... security.declareProtected('Change Poll', 'editPoll') def editPoll(self, question, responses): """ Changes the question and responses. """ self._question = question self._responses = responses self._votes = {} for i in range(len(responses)): self._votes[i] = 0 Notice how the __init__ method has been refactored to use the new editPoll method. Also notice how the editPoll method is protected by a new permissions, Change Poll. There still is a problem with the editPoll method. When you call it from the editPollForm through the web nothing is returned. This is a bad management interface. You want this method to return an HTML response when called from the web, but you do not want it to do this when it is called from __init__. Here’s the solution: class Poll(...): ... def editPoll(self, question, responses, REQUEST=None): """Changes the question and responses.""" self._question = question self._responses = responses self._votes = {} for i in range(len(responses)): self._votes[i] = 0 if REQUEST is not None: return self.editPollForm(REQUEST, manage_tabs_message='Poll question and responses changed.') If this method is called from the web, then Zope will automatically supply the REQUEST parameter. (See chapter 4 for more information on object publishing). By testing the REQUEST you can find out if your method was called from the web or not. If you were called from the web you return the edit form again. A management interface convention that you should use is the manage_tab_message DTML variable. If you set this variable when calling a management view, it displays a status message at the top of the page. You should use this to provide feedback to users indicating that their actions have been taken when it is not obvious. 
For example, if you don’t return a status message from your editPoll method, users may be confused and may not realize that their changes have been made. Sometimes when displaying management views, the wrong tab will be highlighted. This is because ‘manage_tabs’ can’t figure out from the URL which view should be highlighted. The solution is to set the ‘management_view’ variable to the label of the view that should be highlighted. Here’s an example, using the ‘editPoll’ method: def editPoll(self, question, responses, REQUEST=None): """ Changes the question and responses. """ self._question = question self._responses = responses self._votes = {} for i in range(len(responses)): self._votes[i] = 0 if REQUEST is not None: return self.editPollForm(REQUEST, management_view='Edit', manage_tabs_message='Poll question and responses changed.') Now let’s take a look a how to define an icon for your product. 5.3.7. Icons Zope products are identified in the management interface with icons. An icon should be a 16 by 16 pixel GIF image with a transparent background. Normally icons files are located in a www subdirectory of your product package. To associate an icon with a product class, use the icon parameter to the registerClass method in your product’s constructor. For example: def initialize(registrar): registrar.registerClass( PollProduct, constructors=(addForm, addFunction), icon='www/poll.gif' ) Notice how in this example, the icon is identified as being within the product’s www subdirectory. See the API Reference for more information on the registerClass method of the ProductRegistrar interface. 5.3.8. Online Help Zope has an online help system that you can use to provide help for your products. Its main features are context-sensitive help and API help. You should provide both for your product. 5.3.9. Context Sensitive Help To create context sensitive help, create one help file per management view in your product’s help directory. 
You have a choice of formats including HTML, DTML, structured text, GIF, JPG, and PNG. Register your help files at product initialization with the registerHelp() method on the registrar object:

    def initialize(registrar):
        ...
        registrar.registerHelp()

This method will take care of locating your help files and creating help topics for each help file. It can recognize these file extensions: .html, .htm, .dtml, .txt, .stx, .gif, .jpg, .png.

If you want more control over how your help topics are created you can use the registerHelpTopic() method, which takes an id and a help topic object as arguments. For example:

    from mySpecialHelpTopics import MyTopic

    def initialize(context):
        ...
        context.registerHelpTopic('myTopic', MyTopic())

Your help topic should adhere to the HelpTopic interface. See the API Reference for more details.

The chief way to bind a help topic to a management screen is to include information about the help topic in the class's manage_options structure. For example:

    manage_options = (
        {'label': 'Edit', 'action': 'editMethod',
         'help': ('productId', 'topicId')},
        )

The help value should be a tuple containing the name of your product's Python package and the file name (or other id) of your help topic. Given this information, Zope will automatically draw a Help button on your management screen and link it to your help topic.

To draw a help button on a management screen that is not a view (such as an add form), use the HelpButton method of the HelpSys object like so:

    <dtml-var "HelpSys.HelpButton('productId', 'topicId')">

This will draw a help button linked to the specified help topic. If you prefer to draw your own help button you can use the helpURL method instead, like so:

    <dtml-var "HelpSys.helpURL(topic='topicId', product='productId')">

This will give you a URL to the help topic. You can choose to draw whatever sort of button or link you wish.

5.3.10.
Other User Interfaces

In addition to providing a through-the-web management interface, your products may also support many other user interfaces. Your product might have no web management interface at all, and might be controlled completely through some other network protocol. Zope provides interfaces and support for WebDAV and XML-RPC. If this isn't enough you can add other protocols.

5.3.11. WebDAV Interfaces

WebDAV treats Zope objects like files and directories. See Chapter 3 for more information on WebDAV. By simply sub-classing from SimpleItem.Item, and from ObjectManager if necessary, you gain basic WebDAV support. Without any work your objects will appear in directory listings, and if your class is an ObjectManager its contents will be accessible via WebDAV. See Chapter 2 for more information on implementing WebDAV support.

5.3.12. XML-RPC and Network Services

XML-RPC is covered in Chapter 2. All your product's methods can be accessible via XML-RPC. However, if you are implementing network services, you should explicitly plan one or more methods for use with XML-RPC. Since XML-RPC allows marshalling of simple strings, lists, and dictionaries, your XML-RPC methods should only accept and return these types. These methods should never accept or return Zope objects. XML-RPC also does not support None, so you should use zero or something else in place of None.

Another issue to consider when using XML-RPC is security. Many XML-RPC clients still don't support HTTP basic authorization. Depending on which XML-RPC clients you anticipate, you may wish to make your XML-RPC methods public and accept authentication credentials as arguments to your methods.

5.3.13. Packaging Products

Zope products are normally packaged as tarballs. You should create your product tarball in such a way as to allow it to be unpacked in the Products directory.
For example, cd to the Products directory and then issue a tar command like so:

    $ tar zcvf MyProduct-1.0.1.tgz MyProduct

This will create a gzipped tar archive containing your product. You should include your product name and version number in the file name of the archive. See the Poll-1.0.tgz file for an example of a fully packaged Python product.

5.3.14. Product Information Files

Along with your Python and ZPT files you should include some information about your product in its root directory:

    README.txt – Provides basic information about your product. Zope will parse this file as StructuredText and make it available on the README view of your product in the control panel.

    VERSION.txt – Contains the name and version of your product on a single line. For example, 'Multiple Choice Poll 1.1.0'. Zope will display this information as the 'version' property of your product in the control panel.

    LICENSE.txt – Contains your product license, or a link to it.

You may also wish to provide additional information. Here are some suggested optional files to include with your product:

    INSTALL.txt – Provides special instructions for installing the product and components on which it depends. This file is only optional if your product does not require more than an ungzip/untar into a Zope installation to work.

    TODO.txt – This file should make clear where this product release needs work, and what the product author intends to do about it.

    CHANGES.txt and HISTORY.txt – CHANGES.txt should enumerate changes made in particular product versions from the last release of the product. Optionally, a HISTORY.txt file can be used for older changes, while CHANGES.txt lists only recent changes.

    DEPENDENCIES.txt – Lists dependencies, including required OS platform, required Python version, required Zope version, required Python packages, and required Zope products.

5.3.15. Product Directory Layout

By convention your product will contain a number of sub-directories.
Some of these directories have already been discussed in this chapter. Here is a summary of them.

- www – Contains your icon & ZPT files.
- help – Contains your help files.
- tests – Contains your unit tests.

It is not necessary to include these directories if you don't have anything to go in them.

5.4. Evolving Products

As you develop your product classes you will generally make a series of product releases. While you don't know in advance how your product will change, when it does change there are measures that you can take to minimize problems.

5.4.1. Evolving Classes

Issues can occur when you change your product class because instances of these classes are generally persistent. This means that instances created with an old class will start using a new class. If your class changes drastically this can break existing instances.

The simplest way to handle this situation is to provide class attributes as defaults for newly added attributes. For example, if the latest version of your class expects an 'improved_spam' instance attribute while earlier versions only sported 'spam' attributes, you may wish to define an 'improved_spam' class attribute in your new class so your old objects won't break when they run with your new class. You might set 'improved_spam' to None in your class, and in methods where you use this attribute you may have to take into account that it may be None. For example:

  class Sandwich(...):

      improved_spam = None

      ...

      def assembleSandwichMeats(self):
          ...
          # test for old sandwich instances
          if self.improved_spam is None:
              self.updateToNewSpam()
          ...

Another solution is to use the standard Python pickling hook '__setstate__'; however, this is in general more error prone and complex. A third option is to create a method to update old instances. Then you can manually call this method on instances to update them. Note, this won't work unless the instances function well enough to be accessible via the Zope management screens.
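As a concrete illustration of the '__setstate__' option, here is a minimal, standalone sketch. The Sandwich class and its 'spam'/'improved_spam' attributes are the chapter's running example; the pickling round-trip below only simulates the persistence machinery loading an old instance, and is not Zope's actual storage code.

```python
import pickle

class Sandwich:
    """Sketch of the chapter's running example, with a __setstate__
    hook that upgrades state written by an older class version."""
    improved_spam = None  # class-attribute default for old instances

    def __setstate__(self, state):
        # Old pickles carry only a 'spam' attribute; migrate it here.
        if "improved_spam" not in state and "spam" in state:
            state["improved_spam"] = state.pop("spam")
        self.__dict__.update(state)

# Simulate an instance persisted by the old class (only 'spam' set).
old = Sandwich.__new__(Sandwich)
old.__dict__ = {"spam": "classic"}

restored = pickle.loads(pickle.dumps(old))
print(restored.improved_spam)  # -> classic
```

As the chapter warns, this is more error prone than the class-attribute default: every future change to the instance state has to be reflected in '__setstate__' as well.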
While you are developing a product you won't have to worry too much about these details, since you can always delete old instances that break with new class definitions. However, once you release your product and other people start using it, then you need to start planning for the eventuality of upgrading.

Another nasty problem that can occur is breakage caused by renaming your product classes. You should avoid this since it breaks all existing instances. If you really must change your class name, provide aliases to it using the old name. You may, however, change your class's base classes without causing these kinds of problems.

5.4.2. Evolving Interfaces

The basic rule of evolving interfaces is: don't do it. While you are working privately you can change your interfaces all you wish. But as soon as you make your interfaces public you should freeze them. The reason is that it is not fair to users of your interfaces to change them after the fact. An interface is a contract. It specifies how to use a component and it specifies how to implement types of components. Both users and developers will have problems if you change the interfaces they are using or implementing.

The general solution is to create simple interfaces in the first place, and create new ones when you need to change an existing interface. If your new interfaces are compatible with your existing interfaces you can indicate this by making your new interfaces extend your old ones. If your new interface replaces an old one but does not extend it, you should give it a new name such as WidgetWithBellsOn. Your components should continue to support the old interface in addition to the new one for a few releases.

5.5. Conclusion

Migrating your components into fully fledged Zope products is a process with a number of steps. There are many details to keep track of. However, if you follow the recipe laid out in this chapter you should have no problems.
Zope products are a powerful framework for building web applications. By creating products you can take advantage of Zope's features including security, scalability, through-the-web management, and collaboration.
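The interface-evolution advice in section 5.4.2 can be sketched as follows. Zope's real mechanism is its own Interface package; plain Python abstract base classes are used here only to keep the sketch standalone, and the widget names are illustrative, not from the guide.

```python
from abc import ABC, abstractmethod

class IWidget(ABC):
    """The original, frozen interface."""
    @abstractmethod
    def render(self): ...

class IWidgetWithBellsOn(IWidget):
    """A new, compatible interface extends the old one
    instead of changing it in place."""
    @abstractmethod
    def ring(self): ...

class FancyWidget(IWidgetWithBellsOn):
    """A component supporting both the old and the new interface."""
    def render(self):
        return "<widget/>"
    def ring(self):
        return "ding"

w = FancyWidget()
print(isinstance(w, IWidget), isinstance(w, IWidgetWithBellsOn))  # -> True True
```

Because IWidgetWithBellsOn extends IWidget, code written against the old contract keeps working with components that implement only the new one.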
I searched many times on the Internet for an easy way to implement an AppBar (desktop application toolbar), but without success. Microsoft provides some WinAPI shell functions like SHAppBarMessage (read here about it) and an old sample application (AppBar.exe). I tried it out, but that was not what I expected, because Shell AppBars reorganize the desktop to fit the toolbar, while I wanted a sliding bar that doesn't disturb other application windows or the desktop icons.

So, I have developed a single MFC object, CAppBarMngr, to allow almost any application to become a sliding AppBar, with minimal changes to the application's source code. Since it is necessary to respond to mouse movements outside the application window, I had to implement a global mouse hook, which requires generating a DLL; but you don't have to deal with the DLL internals, just distribute it with your software.

There is just one class: CAppBarMngr, which will be responsible for sliding the application's main frame from the left or right edge of the desktop screen. As I mentioned earlier, a global hook is needed to respond appropriately to the mouse cursor position (i.e., to detect when the side edge has been reached). Traditionally, a global hook is implemented as a DLL that sends messages to an application's window through its HWND handle (hook DLLs can't be MFC enabled), which requires several modifications in the application's code for receiving and managing the hook messages. In order to avoid that, however, I have tried a different approach: to send messages to a secondary thread, as explained below.

This class is derived from CWinThread, so it is capable of receiving messages sent using the WinAPI PostThreadMessage() function, by implementing the PreTranslateMessage() event. Also, this class has some other responsibilities: to be a wrapper for the hook DLL, and to handle window movements (sliding).
CAppBarMngr has just one public member: the Init() function. It will be used to link the manager with the managed window. It receives three arguments, as described in the source code:

  //---------------------------------------------------------------------
  // Function: Init - Loads DLL functions and initializes mouse hook
  // Arguments: _hWnd  - Handle of window to manage
  //            _width - Desired width of managed window
  //            _left  - True if window is left side docked, false if right sided
  // Returns: APPBARHOOK_DLLERROR      - An error has occurred while loading DLL functions
  //          APPBARHOOK_ALREADYHOOKED - Another instance has already hooked the mouse
  //          APPBARHOOK_SUCCESS       - All is OK
  //---------------------------------------------------------------------
  int CAppBarMngr::Init(HWND _hWnd, int _width, bool _left)

This is all you have to do to implement an AppBar in your application: #include "AppBarMngr.h", create the manager thread in your application's InitInstance() with ::AfxBeginThread(), and call its Init() function.

Here is the code portion from the demo application:

  BOOL CAppBarDemoApp::InitInstance()
  {
      CMainFrame* pFrame = new CMainFrame;
      m_pMainWnd = pFrame;

      // creates a simple frame, without caption, icon or menu
      pFrame->Create(NULL, "AppBarDemo", WS_POPUP);

      // avoid taskbar button to appear, also removes 3D edge
      pFrame->ModifyStyleEx(WS_EX_APPWINDOW|WS_EX_CLIENTEDGE, WS_EX_TOOLWINDOW);

      // Don't show, AppBar Manager class will do
      pFrame->ShowWindow(SW_HIDE);
      pFrame->UpdateWindow();

      // Create AppBar manager thread
      CAppBarMngr *appbar = (CAppBarMngr *)::AfxBeginThread(RUNTIME_CLASS(CAppBarMngr));

      // Init AppBar Manager, right sided
      int result = appbar->Init(pFrame->m_hWnd, 150, false); // use true for left sided

      // Check if hooking has been successful
      if (result == APPBARHOOK_SUCCESS)
          return TRUE;
      else if (result == APPBARHOOK_DLLERROR)
          ::AfxMessageBox("Error loading AppBarHook.dll");
      // else should be APPBARHOOK_ALREADYHOOKED, close application

      return FALSE;
  }

That's all you have to do.
Also, don't forget to distribute the hook DLL (AppBarHook.dll) with your executable application; it must reside in the same directory to work properly.

As it makes no sense to run two copies of the same AppBar program, the Init() function notifies you when the hook has already been installed (by returning APPBARHOOK_ALREADYHOOKED), so you can use it to prevent a second instance from running. Notice the last lines in the example above: if the hook has already been used, it returns FALSE to close the application.

I had to develop a project to implement the global mouse hook DLL. It has just one file: AppBarHook.cpp. I don't want to make a hook tutorial here because there are several great articles here at CodeProject, so I will just give you some of the details. The usual hook technique saves a window handle (HWND) of the receiving window for hook messages. I have used a thread ID instead; that's why CAppBarMngr is a thread object. So, messages are passed using the WinAPI ::PostThreadMessage() function instead of ::SendMessage().

The hook will detect mouse events of possible interest to CAppBarMngr. I say possible because the hook doesn't know about the window state and position; it knows only about the managed edge and window width. The MFC class will do the rest of the work. The hook DLL exports only one function: SetHook(), which creates the global mouse hook and saves the desired width and edge. It is called by CAppBarMngr.

I have created a simple demo application project for testing purposes. It has a simple CFrameWnd derived object without frame, caption, menu or border. I have tested this with other standard MFC windows without any problem. The demo source is a Visual C++ 6.0 project, but you will be able to open it and automatically convert it to a newer Visual C++ version. If you find any bug in this application, please don't worry about it.
It is just for demo purposes and is not the focus of this article. I will not describe its internals here for the same reason. I have been asked several times for a .NET version of this control. I have started working on it, and I will leave a message in the forum below when it is ready.
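For readers who want to experiment with the hook-to-thread messaging design outside of Windows, here is a hedged, portable sketch in Python: queue.Queue stands in for the thread message queue that PostThreadMessage() fills, and all names here are mine, not the article's actual API.

```python
import queue
import threading

messages = queue.Queue()  # stands in for the manager thread's message queue

def mouse_hook(x, y, managed_edge_x):
    """Hook side: post an event when the cursor reaches the managed edge."""
    if x <= managed_edge_x:
        messages.put(("EDGE_REACHED", x, y))

results = []

def manager_thread():
    """Manager side: block on the queue, like PreTranslateMessage()
    waiting on posted thread messages."""
    msg = messages.get()
    if msg[0] == "EDGE_REACHED":
        results.append("slide window in")

t = threading.Thread(target=manager_thread)
t.start()
mouse_hook(0, 300, managed_edge_x=0)  # simulate the cursor hitting the edge
t.join()
print(results)  # -> ['slide window in']
```

The design point is the same as the article's: the hook only reports raw mouse events, and the manager thread decides what they mean for the window.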
11 August 2010 07:12 [Source: ICIS news]

By Prema Viswanathan

SINGAPORE (ICIS)--India's polyolefins markets have rebounded strongly, with prices surging by nearly $120/tonne (€91/tonne) week on week, propelled by tight supply, rising feedstock naphtha and olefins costs and a price hike in the key China market, industry sources said on Wednesday. The trend is likely to continue, with the start of the peak pre-festival demand season in September, they added.

Prices of raffia grade polypropylene (PP) were at $1,240-1,280/tonne CFR (cost and freight). High density polyethylene (HDPE) rose $120/tonne to $1,180-1,220/tonne CFR India during the same period. (please see charts below for more details)

"Imports of PP and PE into

Availability was extremely scarce early this week, with several key suppliers having already sold out their August allocations, sources said. The supply constraints have been exacerbated by a recent outage at Reliance Industries' Nagothane cracker complex, end users said.

"Our inventories are very low and we are scrambling for cargoes, but import offers are few and local supply is tight," said a PP converter.

Imports of PP have dwindled due to limited supply from key exporting regions such as the

The anti-dumping duties imposed by the Indian government had led to fears that Saudi exporters may be hit by additional duties, which had further reduced PP imports, they added.

"Options for selling to Indian domestic converters are narrowing as no buyer or seller wants to take a risk with the bogey of duties hanging over their heads," said a source close to a Saudi exporter. The ongoing probe into PP imports from

The imposition of fresh sanctions against

"Iranian exports to India have declined somewhat in the past few weeks as suppliers and buyers grapple with the issue of banking restrictions imposed on Iranian products and consequent delays due to diversion of cargoes through Dubai port," said a source close to an Iranian supplier.
Iranian PE and PP were therefore priced around $30-40/tonne lower than product from other destinations, he added.

Persistently high crude values at above $80/bbl and the hike in domestic polymer prices by Indian rupees (Rs) 2-3/kg (Rs2,000-3,000/tonne) ($43-65/tonne) last week had also boosted sentiment in the Indian market.

"There is a feeling now that prices will continue to rise in the next few weeks, so processors who had maintained low inventories in the past few months due to the high price environment are now desperate to stock up," said a trader.

Even the ongoing monsoon season failed to dampen buying sentiment, especially for PP and low density PE (LDPE), which faced the most intense supply constraints.

"Our export orders for finished goods have begun to pick up again, but we are not in a position to meet our commitments due to restricted raw material supply," said a PP converter.

Demand from the flexible and rigid packaging segments has been quite good lately, said an LDPE processor. "We expect demand to continue growing in the high double digits in the next few months, especially for multi-layer film used in milk pouches and edible oils packaging," he said.

($1 = €0.74, $1 = Rs46.33)
>>>> How?
>>>>>>> Sure, I'll write a C version and try to reproduce the warning.
>>>>> Unfortunately, the C equivalent can't reproduce the warning, I've run the
>> test for the whole night. :( While using the script, often I can trigger
>> the warning in several mins.
>
> Ho-hum... I wonder if we are hitting cgroup_clone() in all that fun...

I don't think so, I think cgroup_clone() will be called only if namespace is
used, like clone(CLONE_NEWNS). Even if cgroup_clone() gets called, it will
return before doing any vfs work unless the ns_cgroup subsystem is mounted.

int cgroup_clone(struct task_struct *tsk, struct cgroup_subsys *subsys,
		 char *nodename)
{
	...
	mutex_lock(&cgroup_mutex);
again:
	root = subsys->root;
	if (root == &rootnode) {	<--- here
		mutex_unlock(&cgroup_mutex);
		return 0;
	}

> Could you
> a) add a printk to that sucker
> b) independently from (a), see if wrapping these syscalls into
>	pid = fork();
>	if (!pid) {
>		[make a syscall, print something]
>		exit(0);
>	} else if (pid > 0) {
>		waitpid(pid, NULL, 0);
>	}
> and see what happens...
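For anyone reproducing the suggested test harness from a higher-level language, the fork/waitpid wrapper quoted above looks like this in Python (POSIX only; the wrapped print is a stand-in for the syscall under test):

```python
import os

def run_isolated(fn):
    """Fork, run fn in the child, exit the child, and have the parent
    wait -- mirroring the wrapper suggested in the mail."""
    pid = os.fork()
    if pid == 0:
        fn()               # make the syscall, print something
        os._exit(0)        # child exits without running parent cleanup
    _, status = os.waitpid(pid, 0)
    return status

status = run_isolated(lambda: print("child pid:", os.getpid()))
print("exit status:", status)  # -> exit status: 0
```

The point of the wrapper is isolation: each probed call runs in a short-lived child, so any per-task state it leaves behind dies with the child.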
I receive, every month or so, a free magazine from VSJ. The articles in the magazine focus on "visual" languages such as Java, VB.NET, etc. Along with the articles, there is an "Outlook" section that discusses some recent developments or something similar, and also book reviews as well as the usual advertising. In the Feb 5th edition, there was an article entitled "Script it with C#" (you can read the original here). This article is a .NET 2.0 version inspired by the original VSJ article, but duly rewritten from the ground up.

I contemplated changing the existing article to support the .NET 2.0 Framework back when it was still in Beta 1; however, when I tried this, it ran into a plethora of errors, and due to the time constraints, I dropped it. This evening, I decided that it would be nice if it did work under .NET 2.0 (just because of the support for generics), and so I did it.

Included inside the zip file are the source files (or should I say file) along with a VC# Express project to compile it with. However, because it's just a single file, you should be able to compile it quite easily using csc.exe.

There are two routines inside the class: Main and RunScript.

Main, as usual, takes care of loading the program. In this case, all it does is check that a file was passed as the first argument, and then runs it, passing the remaining arguments to the script. I decided to leave the first argument in the array so that the script has an easy way of determining where on the hard drive it is located, i.e. arg[0]=scriptfile.csx, arg[1...]=...
  private static void Main(string[] args)
  {
      // If no arguments then display error
      if ((args.Length == 0) || (!File.Exists(args[0])))
      {
          MessageBox.Show("A script file must be provided" +
              " on the command-line.", "No Script File",
              MessageBoxButtons.OK, MessageBoxIcon.Error);
      }
      else
      {
          // Run the script (we'll leave the first argument so
          // that the script knows where it is :)
          CSScriptCompiler.RunScript(args[0], args);
      }
  }

RunScript takes care of all the hard work. Basically, the script is scanned for the #using, #import, #include and #target directives, the directives are processed, and the resulting code is compiled and run.

#using

This is analogous to the C# using statement; it simply imports a namespace so that you can use shorthand versions of the class names. I decided to prefix the statement with a # because it is a bit simpler to avoid clashes with the actual C# statement.

A possibility that crossed my mind was to use regular expressions to increase the robustness of parsing; however, I decided to opt out because it was easier and quicker to accomplish without them.

#import

I never got on well with Dino's system of using a separate file to reference the DLLs; the task is made much easier if all the code is contained within one source file, and that's why I introduced the #import directive. This basically references an assembly so that you can use it in the script; the filename can be relative, absolute, or, if it's on the paths searched by csc, then it can just be a filename.

#include filename

This is like the C #include statement. All that really happens is this line is replaced with the contents of filename. You can of course have includes within includes within includes (insert recursion loop here).

Note: Remember to take this into account when debugging errors. I have not thought of an elegant solution which will keep track of the source files or line numbers.

Along with the source / demo project is an installer created with NSIS which will register the .csx extension for the compiler so that when you run a script file, it automatically executes css.exe, the C# script compiler.
It also associates an icon with the extension to identify the file-type. This installer will be compiled when you build the solution (if you have NSIS on your machine and installed in the default path).

Here is a list of the main features of the application:

If anybody comes up with any cool or useful scripts that could be included with the source, then send me an email through the CP website and if they're good enough I'll include them in the zip.
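The #include handling described above (each directive line replaced by the named file's contents, recursively) can be sketched as follows. The directive syntax is the article's; the expansion function, file names, and the cycle guard are illustrative, not the article's actual implementation.

```python
import re
import tempfile
from pathlib import Path

def expand_includes(text: str, base: Path, seen=None) -> str:
    """Replace each '#include filename' line with that file's contents,
    recursively, guarding against include cycles."""
    seen = set() if seen is None else seen
    out = []
    for line in text.splitlines():
        m = re.match(r"\s*#include\s+(\S+)", line)
        if m:
            path = (base / m.group(1)).resolve()
            if path in seen:  # break the "recursion loop"
                continue
            seen.add(path)
            out.append(expand_includes(path.read_text(), path.parent, seen))
        else:
            out.append(line)
    return "\n".join(out)

# Tiny demonstration with temporary files.
with tempfile.TemporaryDirectory() as d:
    base = Path(d)
    (base / "util.csx").write_text("int Helper() { return 42; }")
    (base / "main.csx").write_text("#include util.csx\nHelper();")
    result = expand_includes((base / "main.csx").read_text(), base)
    print(result)
```

Note that, exactly as the article warns, line numbers in the expanded output no longer match the original source files.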
CSS Inheritance, The Cascade And Global Scope: Your New Old Worst Best Friends

- inheritance,
- the cascade (the "C" in CSS).

Despite these features enabling a DRY, efficient way to style web documents, and despite them being the very reason CSS exists, they have fallen remarkably out of favor. From CSS methodologies such as BEM and Atomic CSS through to programmatically encapsulated CSS modules, many are doing their best to sidestep or otherwise suppress these features. This gives developers more control over their CSS, but only an autocratic sort of control based on frequent intervention.

I'm going to revisit inheritance, the cascade and scope here with respect to modular interface design. I aim to show you how to leverage these features so that your CSS code becomes more concise and self-regulating, and your interface more easily extensible.

Inheritance And font-family

Despite protestations by many, CSS does not only provide a global scope. If it did, everything would look exactly the same. Instead, CSS has a global scope and a local scope. Just as in JavaScript, the local scope has access to the parent and global scope. In CSS, this facilitates inheritance. For instance, if I apply a font-family declaration to the root (read: global) html element, I can ensure that this rule applies to all descendant elements within the document (with a few exceptions, to be addressed in the next section).

html {
  font-family: sans-serif;
}

/* This rule is not needed ↷
p {
  font-family: sans-serif;
} */

Just like in JavaScript, if I declare something within the local scope, it is not available to the global — or, indeed, any ancestral — scope, but it is available to the child scope (elements within p). In the next example, the line-height of 1.5 is not adopted by the html element. However, the a element inside the p does respect the line-height value.
html {
  font-family: sans-serif;
}

p {
  line-height: 1.5;
}

/* This rule is not needed ↷
p a {
  line-height: 1.5;
} */

The great thing about inheritance is that you can establish the basis for a consistent visual design with very little code. And these styles will even apply to HTML you have yet to write. Talk about future-proof!

The Alternative

There are other ways to apply common styles, of course. For example, I could create a .sans-serif class…

.sans-serif {
  font-family: sans-serif;
}

… and apply it to any element that I feel should have that style:

<p class="sans-serif">Lorem ipsum.</p>

This affords me some control: I can pick and choose exactly which elements take this style and which don't. Any opportunity for control is seductive, but there are clear issues. Not only do I have to manually apply the class to any element that should take it (which means knowing what the class is to begin with), but in this case I've effectively forgone the possibility of supporting dynamic content: Neither WYSIWYG editors nor Markdown parsers provide sans-serif classes to arbitrary p elements by default.

That class="sans-serif" is not such a distant relative of style="font-family: sans-serif" — except that the former means adding code to both the style sheet and the HTML. Using inheritance, we can do less of one and none of the other. Instead of writing out classes for each font style, we can just apply any we want to the html element in one declaration:

html {
  font-size: 125%;
  font-family: sans-serif;
  line-height: 1.5;
  color: #222;
}

The inherit Keyword

Some types of properties are not inherited by default, and some elements do not inherit some properties. But you can use [property name]: inherit to force inheritance in some cases. For example, the input element doesn't inherit any of the font properties in the previous example. Nor does textarea.
In order to make sure all elements inherit these properties from the global scope, I can use the universal selector and the inherit keyword. This way, I get the most mileage from inheritance.

* {
  font-family: inherit;
  line-height: inherit;
  color: inherit;
}

html {
  font-size: 125%;
  font-family: sans-serif;
  line-height: 1.5;
  color: #222;
}

Note that I've omitted font-size. I don't want font-size to be inherited directly because it would override user-agent styles for heading elements, the small element and others. This way, I save a line of code and can defer to user-agent styles if I should want. Another property I would not want to inherit is font-style: I don't want to unset the italicization of ems just to code it back in again. That would be wasted work and result in more code than I need.

Now, everything either inherits or is forced to inherit the font styles I want them to. We've gone a long way to propagating a consistent brand, project-wide, with just two declaration blocks. From this point onwards, no developer has to even think about font-family, line-height or color while constructing components, unless they are making exceptions. This is where the cascade comes in.

Exceptions-Based Styling

I'll probably want my main heading to adopt the same font-family, color and possibly line-height. That's taken care of using inheritance. But I'll want its font-size to differ. Because the user agent already provides an enlarged font-size for h1 elements (and it will be relative to the 125% base font size I've set), it's possible I don't need to do anything here. However, should I want to tweak the font size of any element, I can. I take advantage of the global scope and only tweak what I need to in the local scope.
* {
  font-family: inherit;
  line-height: inherit;
  color: inherit;
}

html {
  font-size: 125%;
  font-family: sans-serif;
  line-height: 1.5;
  color: #222;
}

h1 {
  font-size: 3rem;
}

If the styles of CSS elements were encapsulated by default, this would not be possible: I'd have to add all of the font styles to h1 explicitly. Alternatively, I could divide my styles up into separate classes and apply each to the h1 as a space-separated value:

<h1 class="Ff(sans) Fs(3) Lh(1point5) C(darkGrey)">Hello World</h1>

Either way, it's more work and a styled h1 would be the only outcome. Using the cascade, I've styled most elements the way I want them, with h1 just as a special case, just in one regard. The cascade works as a filter, meaning styles are only ever stated where they add something new.

Element Styles

We've made a good start, but to really leverage the cascade, we should be styling as many common elements as possible. Why? Because our compound components will be made of individual HTML elements, and a screen-reader-accessible interface makes the most of semantic markup. To put it another way, the style of "atoms" that make up your interface "molecules" (to use atomic design terminology) should be largely addressable using element selectors. Element selectors are low in specificity, so they won't override any class-based styles you might incorporate later.

The first thing you should do is style all of the elements that you know you're going to need:

a { … }

p { … }

h1, h2, h3 { … }

input, textarea { … }

/* etc */

The next part is crucial if you want a consistent interface without redundancy: Each time you come to creating a new component, if it introduces new elements, style those new elements with element selectors. Now is not the time to introduce restrictive, high-specificity selectors. Nor is there any need to compose a class. Semantic elements are what they are.
For example, if I've yet to style button elements (as in the previous example) and my new component incorporates a button element, this is my opportunity to style button elements for the entire interface.

button {
  padding: 0.75em;
  background: #008;
  color: #fff;
}

button:focus {
  outline: 0.25em solid #dd0;
}

Now, when you come to write a new component that also happens to incorporate buttons, that's one less thing to worry about. You're not rewriting the same CSS under a different namespace, and there's no class name to remember or write either. CSS should always aim to be this effortless and efficient — it's designed for it.

Using element selectors has three main advantages:

- The resulting HTML is less verbose (no redundant classes).
- The resulting style sheet is less verbose (styles are shared between components, not rewritten per component).
- The resulting styled interface is based on semantic HTML.

The use of classes to exclusively provide styles is often defended as a "separation of concerns." This is to misunderstand the W3C's separation of concerns principle. The objective is to describe structure with HTML and style with CSS. Because classes are designated exclusively for styling purposes and they appear within the markup, you are technically breaking with separation wherever they're used. You have to change the nature of the structure to elicit the style.

Wherever you don't rely on presentational markup (classes, inline styles), your CSS is compatible with generic structural and semantic conventions. This makes it trivial to extend content and functionality without it also becoming a styling task. It also makes your CSS more reusable across different projects where conventional semantic structures are employed (but where CSS 'methodologies' may differ).

Special Cases

Before anyone accuses me of being simplistic, I'm aware that not all buttons in your interface are going to do the same thing.
I'm also aware that buttons that do different things should probably look different in some way. But that's not to say we need to defer to classes, inheritance or the cascade. To make buttons found in one interface look fundamentally dissimilar is to confound your users. For the sake of accessibility and consistency, most buttons only need to differ in appearance by label.

<button>create</button>
<button>edit</button>
<button>delete</button>

Remember that style is not the only visual differentiator. Content also differentiates visually — and in a way that is much less ambiguous. You're literally spelling out what different things are for.

There are fewer instances than you might imagine where using style alone to differentiate content is necessary or appropriate. Usually, style differences should be supplemental, such as a red background or a pictographic icon accompanying a textual label. The presence of textual labels is of particular utility to those using voice-activation software: Saying "red button" or "button with cross icon" is not likely to elicit recognition by the software.

I'll cover the topic of adding nuances to otherwise similar looking elements in the "Utility Classes" section to follow.

Attributes

Semantic HTML isn't just about elements. Attributes define types, properties and states. These too are important for accessibility, so they need to be in the HTML where applicable. And because they're in the HTML, they provide additional opportunities for styling hooks. For example, the input element takes a type attribute, should you want to take advantage of it, and also attributes such as aria-invalid to describe state.
input,
textarea {
  border: 2px solid;
  padding: 0.5rem;
}

[aria-invalid] {
  border-color: #c00;
  padding-right: 1.5rem;
  background: url(images/cross.svg) no-repeat center 0.5em;
}

A few things to note here:

- I don't need to set color, font-family or line-height here because these are inherited from html, thanks to my use of the inherit keyword. If I want to change the main font-family used application-wide, I only need to edit the one declaration in the html block.
- The border color is linked to color, so it too inherits the global color. All I need to declare is the border's width and style.
- The [aria-invalid] attribute selector is unqualified. This means it has better reach (it can be used with both my input and textarea selectors) and it has minimal specificity. Simple attribute selectors have the same specificity as classes. Using them unqualified means that any classes written further down the cascade will override them as intended.

The BEM methodology would solve this by applying a modifier class, such as input--invalid. But considering that the invalid state should only apply where it is communicated accessibly, input--invalid is necessarily redundant. In other words, the aria-invalid attribute has to be there, so what's the point of the class?

Just Write HTML

My absolute favorite thing about making the most of element and attribute selectors high up in the cascade is this: The composition of new components becomes less a matter of knowing the company or organization's naming conventions and more a matter of knowing HTML. Any developer versed in writing decent HTML who is assigned to the project will benefit from inheriting styling that's already been put in place. This dramatically reduces the need to refer to documentation or write new CSS. For the most part, they can just write the (meta) language that they should know by rote. Tim Baxter also makes a case for this in Meaningful CSS: Style It Like You Mean It.
Layout So far, we’ve not written any component-specific CSS, but that’s not to say we haven’t styled anything. All components are compositions of HTML elements. It’s largely in the order and arrangement of these elements that more complex components form their identity. Which brings us to layout. Principally, we need to deal with flow layout — the spacing of successive block elements. You may have noticed that I haven’t set any margins on any of my elements so far. That’s because margins should not be considered a property of elements but a property of the context of elements. That is, they should only come into play where elements meet. Fortunately, the adjacent sibling combinator can describe exactly this relationship. Harnessing the cascade, we can instate a uniform default across all block-level elements that appear in succession, with just a few exceptions. * { margin: 0; } * + * { margin-top: 1.5em; } body, br, li, dt, dd, th, td, option { margin-top: 0; } The use of the extremely low-specificity lobotomized owl selector ensures that any elements (except the common exceptions) are spaced by one line. This means that there is default white space in all cases, and developers writing component flow content will have a reasonable starting point. In most cases, margins now take care of themselves. But because of the low specificity, it’s easy to override this basic one-line spacing where needed. For example, I might want to close the gap between labels and their respective fields, to show they are paired. In the following example, any element that follows a label (input, textarea, select, etc.) closes the gap. label { display: block } label + * { margin-top: 0.5rem; } Once again, using the cascade means only having to write specific styles where necessary. Everything else conforms to a sensible baseline. Note that, because margins only appear between elements, they don’t double up with any padding that may have been included for the container.
That’s one more thing not to have to worry about or code defensively against. Also, note that you get the same spacing whether or not you decide to include wrapper elements. That is, you can do the following and achieve the same layout — it’s just that the margins emerge between the divs rather than between labels following inputs. <form> <div> <label for="one">Label one</label> <input id="one" name="one" type="text"> </div> <div> <label for="two">Label two</label> <input id="two" name="two" type="text"> </div> <button type="submit">Submit</button> </form> Achieving the same result with a methodology such as atomic CSS would mean composing specific margin-related classes and applying them manually in each case, including for first-child exceptions handled implicitly by * + *: <form class="Mtop(1point5)"> <div class="Mtop(0)"> <label for="one" class="Mtop(0)">Label one</label> <input id="one" name="one" type="text" class="Mtop(0point75)"> </div> <div class="Mtop(1point5)"> <label for="two" class="Mtop(0)">Label two</label> <input id="two" name="two" type="text" class="Mtop(0point75)"> </div> <button type="submit" class="Mtop(1point5)">Submit</button> </form> Bear in mind that this would only cover top margins if one is adhering to atomic CSS. You’d have to prescribe individual classes for color, background-color and a host of other properties, because atomic CSS does not leverage inheritance or element selectors. 
<form class="Mtop(1point5) Bdc(#ccc) P(1point5)"> <div class="Mtop(0)"> <label for="one" class="Mtop(0) C(brandColor) Fs(bold)">Label one</label> <input id="one" name="one" type="text" class="Mtop(0point75) C(brandColor) Bdc(#fff) B(2) P(1)"> </div> <div class="Mtop(1point5)"> <label for="two" class="Mtop(0) C(brandColor) Fs(bold)">Label two</label> <input id="two" name="two" type="text" class="Mtop(0point75) C(brandColor) Bdc(#fff) B(2) P(1)"> </div> <button type="submit" class="Mtop(1point5) C(#fff) Bdc(blue) P(1)">Submit</button> </form> Atomic CSS gives developers direct control over style without deferring completely to inline styles, which are not reusable like classes. By providing classes for individual properties, it reduces the duplication of declarations in the stylesheet. However, it necessitates direct intervention in the markup to achieve these ends. This requires learning and committing to its verbose API, as well as having to write a lot of additional HTML code. Instead, by styling arbitrary HTML elements and their spatial relationships, CSS ‘methodology’ becomes largely obsolete. You have the advantage of working with a unified design system, rather than an HTML system with a superimposed styling system to consider and maintain separately. Anyway, here’s how the structure of our CSS should look with our flow content solution in place: - global (html) styles and enforced inheritance, - flow algorithm and exceptions (using the lobotomized owl selector), - element and attribute styles. We’ve yet to write a specific component or conceive a CSS class, but a large proportion of our styling is done — that is, if we write our classes in a sensible, reusable fashion. Utility Classes The thing about classes is that they have a global scope: Anywhere they are applied in the HTML, they are affected by the associated CSS.
For many, this is seen as a drawback, because two developers working independently could write a class with the same name and negatively affect each other’s work. CSS modules were recently conceived to remedy this scenario by programmatically generating unique class names tied to their local or component scope. <!-- my module's button --> <button class="button_dysuhe027653">Press me</button> <!-- their module's button --> <button class="button_hydsth971283">Hit me</button> Ignoring the superficial ugliness of the generated code, you should be able to see where disparity between independently authored components can easily creep in: Unique identifiers are used to style similar things. The resulting interface will either be inconsistent or be consistent with much greater effort and redundancy. There’s no reason to treat common elements as unique. You should be styling the type of element, not the instance of the element. Always remember that the term “class” means “type of thing, of which there may be many.” In other words, all classes should be utility classes: reusable globally. Of course, in this example, a .button class is redundant anyway: we have the button element selector to use instead. But what if it was a special type of button? For instance, we might write a .danger class to indicate that buttons do destructive actions, like deleting data: .danger { background: #c00; color: #fff; } Because class selectors are higher in specificity than element selectors and of the same specificity as attribute selectors, any rules applied in this way will override the element and attribute rules further up in the style sheet. So, my danger button will appear red with white text, but its other properties — like padding, the focus outline, and the margin applied via the flow algorithm — will remain intact. <button class="danger">delete</button> Name clashes may happen, occasionally, if several people are working on the same code base for a long time. 
But there are ways of avoiding this, like, oh, I don’t know, first doing a text search to check for the existence of the name you are about to take. You never know, someone may have solved the problem you’re addressing already. Local Scope Utilities My favorite thing to do with utility classes is to set them on containers, then use this hook to affect the layout of child elements within. For example, I can quickly code up an evenly spaced, responsive, center-aligned layout for any elements: .centered { text-align: center; margin-bottom: -1rem; /* adjusts for leftover bottom margin of children */ } .centered > * { display: inline-block; margin: 0 0.5rem 1rem; } With this, I can center group list items, buttons, a combination of buttons and links, whatever. That’s thanks to the use of the > * part, which means that any immediate children of .centered will adopt these styles, in this scope, but inherit global and element styles, too. And I’ve adjusted the margins so that the elements can wrap freely without breaking the vertical rhythm set using the * + * selector above it. It’s a small amount of code that provides a generic, responsive layout solution by setting a local scope for arbitrary elements. My tiny (93B minified) flexbox-based grid system is essentially just a utility class like this one. It’s highly reusable, and because it employs flex-basis, no breakpoint intervention is needed. I just defer to flexbox’s wrapping algorithm. 
.fukol-grid { display: flex; flex-wrap: wrap; margin: -0.5em; /* adjusting for gutters */ } .fukol-grid > * { flex: 1 0 5em; /* The 5em part is the basis (ideal width) */ margin: 0.5em; /* Half the gutter value */ } Using BEM, you’d be encouraged to place an explicit “element” class on each grid item: <div class="fukol"> <!-- the outer container, needed for vertical rhythm --> <ul class="fukol-grid"> <li class="fukol-grid__item"></li> <li class="fukol-grid__item"></li> <li class="fukol-grid__item"></li> <li class="fukol-grid__item"></li> </ul> </div> But there’s no need. Only one identifier is required to instantiate the local scope. The items here are no more protected from outside influence than the ones in my version, targeted with > * — nor should they be. The only difference is the inflated markup. So, now we’ve started incorporating classes, but only generically, as they were intended. We’re still not styling complex components independently. Instead, we’re solving system-wide problems in a reusable fashion. Naturally, you will need to document how these classes are used in your comments. Utility classes like these take advantage of CSS’ global scope, the local scope, inheritance and the cascade simultaneously. The classes can be applied universally; they instantiate the local scope to affect just their child elements; they inherit styles not set here from the parent or global scope; and we’ve not overqualified using element or class selectors. Here’s how our cascade looks now: - global (html) styles and enforced inheritance, - flow algorithm and exceptions (using the lobotomized owl selector), - element and attribute styles, - generic utility classes. Of course, there may never be the need to write either of these example utilities. The point is that, if the need does emerge while working on one component, the solution should be made available to all components. Always be thinking in terms of the system.
Component-Specific Styles We’ve been styling components, and ways to combine components, from the beginning, so it’s tempting to leave this section blank. But it’s worth stating that any components not created from other components (right down to individual HTML elements) are necessarily over-prescribed. They are to components what IDs are to selectors and risk becoming anachronistic to the system. In fact, a good exercise is to identify complex components (“molecules,” “organisms”) by ID only and try not to use those IDs in your CSS. For example, you could place #login on your log-in form component. You shouldn’t have to use #login in your CSS with the element, attribute and flow algorithm styles in place, although you might find yourself making one or two generic utility classes that can be used in other form components. If you do use #login, it can only affect that component. It’s a reminder that you’ve moved away from developing a design system and towards the interminable occupation of merely pushing pixels. Conclusion When I tell folks that I don’t use methodologies such as BEM or tools such as CSS modules, many assume I’m writing CSS like this: header nav ul li { display: inline-block; } header nav ul li a { background: #008; } I don’t. A clear over-specification is present here, and one we should all be careful to avoid. It’s just that BEM (plus OOCSS, SMACSS, atomic CSS, etc.) are not the only ways to avoid convoluted, unmanageable CSS. In an effort to defeat specificity woes, many methodologies defer almost exclusively to the class selector. The trouble is that this leads to a proliferation of classes: cryptic ciphers that bloat the markup and that — without careful attention to documentation — can confound developers new to the in-house naming system they constitute. By using classes prolifically, you also maintain a styling system that is largely separate from your HTML system. 
This misappropriation of ‘separate concerns’ can lead to redundancy or, worse, can encourage inaccessibility: it’s possible to affect a visual style without affecting the accessible state along with it: <input id="my-text" aria- In place of the extensive writing and prescription of classes, I looked at some other methods: - leveraging inheritance to set a precedent for consistency; - making the most of element and attribute selectors to support transparent, standards-based composition; - applying a code- and labor-saving flow layout system; - incorporating a modest set of highly generic utility classes to solve common layout problems affecting multiple elements. All of these were put in service of creating a design system that should make writing new interface components easier and less reliant on adding new CSS code as a project matures. And this is possible not thanks to strict naming and encapsulation, but thanks to a distinct lack of it. Even if you’re not comfortable using the specific techniques I’ve recommended here, I hope this article has at least gotten you to rethink what components are. They’re not things you create in isolation. Sometimes, in the case of standard HTML elements, they’re not things you create at all. The more you compose components from components, the more accessible and visually consistent your interface will be, and with less CSS to achieve that end. There’s not much wrong with CSS. In fact, it’s remarkably good at letting you do a lot with a little. We’re just not taking advantage of that.
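As a recap, the layering described above can be sketched as one stylesheet skeleton. The rules are collected from the examples earlier in the article; the enforced-inheritance rules in part 1 are an assumption, since the article's exact global block is not shown in this excerpt.

```css
/* 1. Global styles and enforced inheritance (illustrative) */
* { font-family: inherit; color: inherit; }

/* 2. Flow algorithm and exceptions (the lobotomized owl) */
* { margin: 0; }
* + * { margin-top: 1.5em; }
body, br, li, dt, dd, th, td, option { margin-top: 0; }

/* 3. Element and attribute styles */
input, textarea { border: 2px solid; padding: 0.5rem; }
[aria-invalid] { border-color: #c00; }

/* 4. Generic utility classes */
.danger { background: #c00; color: #fff; }
```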
https://www.smashingmagazine.com/2016/11/css-inheritance-cascade-global-scope-new-old-worst-best-friends/
This tutorial shows how to use the LEADTOOLS SDK in a WinForms C# application and create a project, add references, and set the license. Before any functionality from the SDK can be leveraged, a valid runtime license will have to be set. For instructions on how to obtain a runtime license refer to Obtaining a License. Launch Visual Studio and select Create a new project. Select Windows Forms App (.NET Framework) and click Next. Add the project name, specify the location where the project is to be saved, and the other project details, then click Create. Add the necessary LEADTOOLS references. The references needed depend upon the purpose of the project. References can be added by one or the other of the following two methods (but not both). Right-click the C# project in the Solution Explorer and select Manage NuGet Packages... Browse for LEADTOOLS, then select any of the LEADTOOLS NuGet packages and install it. Accept LEAD's End User License Agreement. To add the LEADTOOLS references locally, perform the following steps: In the Solution Explorer, right-click References, and select Add Reference. In the Reference Manager, select Browse and then navigate to the directory where the LEADTOOLS Assemblies are located: <INSTALL_DIR>\LEADTOOLS22\Bin\Dotnet4\x64 Select and add Leadtools.dll, then click OK. Now that the LEADTOOLS references have been added to the project, coding can begin. The license unlocks the LEADTOOLS features needed for the project. It must be set before any toolkit function is called. The exact function called depends on the platform used. For details, refer to Setting a Runtime License. There are two types of runtime licenses: an evaluation license, used while evaluating the toolkit, and a release (deployment) license, used after purchase. Right-click on Form1.cs in the Solution Explorer, and select View Code to bring up the code behind the form. Add the following statements to the using block. using System.Windows.Forms; using System.IO; using Leadtools; In the Form1 class add a new method called SetLicense() and call it in the Form1() constructor.
Add the code below to properly set your LEADTOOLS runtime license. public partial class Form1 : Form { public Form1() { InitializeComponent(); SetLicense(); } static void SetLicense() { string license = @"C:\LEADTOOLS22\Support\Common\License\LEADTOOLS.LIC"; string key = File.ReadAllText(@"C:\LEADTOOLS22\Support\Common\License\LEADTOOLS.LIC.KEY"); RasterSupport.SetLicense(license, key); if (RasterSupport.KernelExpired) MessageBox.Show("License file invalid or expired."); else MessageBox.Show("License file set successfully"); } } Run the project by pressing F5, or by selecting Debug -> Start Debugging. If the steps are followed correctly, the application runs and a message box displays License file set successfully. This tutorial showed how to create a new C# .NET WinForms Project, add references via NuGet or local LEADTOOLS DLLs, and set the license. This is the basis for all C# .NET Winforms applications using the LEADTOOLS SDK. All functionality in the SDK is unlocked via setting a license. SetLicense must be called before calling any other LEADTOOLS SDK methods. After the SDK is purchased, the evaluation license can be replaced with a valid release license to disable the Nag Message.
https://www.leadtools.com/help/sdk/v22/tutorials/dotnet-winforms-add-references-and-set-a-license.html
When you first create an instance of an object there is often the need to initialize it before using it. The obvious thing to do is to write an “init” member function and remember to call this before going on. The trouble with this idea is that you could forget to use init before getting on with the rest of the program. To help avoid this Java and most other object oriented languages use special functions called constructors that can be used to automatically initialize a class. A constructor has to have the same name as the class and it is called when you create a new object of the class. For example, if we add the constructor public Point(int a, int b) { X=a; Y=b; } to the point class you can create and initialize a new object using Point Current=new Point(10,20); Notice the way that the values to be used in the initialization are specified. In fact you can think about the use of Point(10,20) as being a call to the constructor. As well as an initialization routine, you can also specify a clean up routine called finalize() that is called whenever an object is about to be deleted. This isn’t used as much as the constructor but it is still handy. If you don’t bother to write a constructor the Java system provides a default constructor that does nothing and takes no parameters. If you define a constructor of any sort the system doesn’t provide a default. So if you define a constructor that takes parameters, e.g. point(x,y), don’t forget to also define a parameter-less constructor, i.e. point(), if you also want to create objects without any initialization or with a default initialization. It is worth knowing at this early stage that the whole subject of initializing objects is a complicated one that has lots of different clever solutions. For the moment knowing about constructors is probably enough. The previous section introduces a new idea.
You can define any number of constructors as long as each one has a different set of parameters. This means you have functions, i.e. the constructors, all with the same name, the name of the class, differing only by the parameters used to call them. This is an example of overloading. Although it doesn’t actually have anything directly to do with object oriented programming, and so rightly should be left until some other time, overloading is too useful to postpone. The general idea is that you can have any number of functions with the same name as long as they can be distinguished by the parameters you use to call the function. In more formal terms the “signature” of a function is the pattern of data types used to call it. For example, void Swap(int a, int b) has a signature int,int and void Swap(float a,float b) has the signature float,float, and void Swap(int a, float b) has the signature int,float and so on… When you call a function for which there is more than one definition the signature is used to sort out which one you mean. In this sense you can consider that the name of a function is not just the name that you call it by but its name and types of parameters. So Swap is really Swapintint, Swapfloatfloat and Swapintfloat and so on. But notice that this is just a handy way to think about things - you can't actually use these names. For example, given the multiple “overloaded” definitions of Swap given earlier, the call Swap(x,y) where x and y are ints would call the first Swap, i.e. the one with signature int,int. As long as the function used can be matched up with a definition with the same signature everything will work. What function overloading allows you to do is create what looks like a single function that works appropriately with a wide range of different types of data. You can even define functions with the same name that have different numbers of parameters.
For example, Swap(int a, int b, int c) has a signature int,int,int and thus is a valid overloaded form of Swap. As already mentioned, one of the most common uses of overloading is to provide a number of different class constructors. For example, you could create a point(int a) constructor which only initializes the x data member or the point(float r, float t) constructor which sets x and y from a radius and an angle. Notice a function’s return type is not part of its signature. A class defines a collection of methods and properties that represents some sort of logically coherent entity. For example a geometric point. However you soon begin to notice that some types of entity form a family tree of things. The key thing that should alert you to this situation is when you find yourself saying an A IS A B with some extra features. In object oriented terms what this is saying is that the class that represents A is essentially B with some additional methods and properties. Overall an A behaves like a B but it has some extras - it is a bit more than a B. To accommodate this situation we allow a new class to be derived from an existing class by building on its definition. For example we have a class that represents a 2D Point and perhaps we want a 2D Point that also specifies a color. Now you could say that PointColor is just a Point with a color property. In this sense we could say that PointColor extends the idea of a Point and in Java this would be written - class PointColor extends Point { } What this means exactly is that the definition of the PointColor class starts off from all of the definitions in the Point class. It is as if you have just cut and pasted all of the text in the Point definition into the PointColor definition. In more technical terms we say that PointColor inherits from Point and this is a specific example of class inheritance.
If you did nothing else at all an instance of the PointColor class would be exactly the same as the Point class - it would have setX, setY, getX, getY and so on as members. In short all of the members of Point are members of PointColor. What is the good of inheritance? Well the key lies in the use of the term “extends”. You don’t have to stop with what you have inherited, you can add data and function members to the new class and these are added to what has been inherited. That is the class that you are extending acts as a basis for your new class - the class you are extending is often called the base class for this reason but the terminology varies. The key idea to get is that when a class extends another it is everything that class is and more. For example, in the case of the PointColor class we need one more variable to store the color and some get/set methods. public class PointColor extends Point { private int Color; void setColor(int color) { Color = color; } int getColor() { return Color; } } Now the new PointColor class has all of the methods of Point and a new Color property complete with get/set methods. You can use all of the methods of PointColor with no distinction between what is inherited and what is newly defined. For example, PointColor PC = new PointColor(); PC.setX(10); PC.setColor(3); Notice that you can't access the property variables such as X, Y and Color directly, only via the set/get methods. This is also true for the code within PointColor. That is PointColor cannot directly access X and Y, the variables that belong to Point. They are just as much private from PointColor as they are to the rest of the world. PointColor can't get at the internals of Point. The relationship between PointColor and its base class Point is simple but it can become more complicated. Before classes and inheritance programmers reused code by copying and pasting the text of the code.
This worked but any changes made to the original code were not passed on to the copied code. With inheritance, if you change the base class, every class that extends it is also changed in the same way. The inheritance is dynamic and this is both a good and a bad thing. The problem is that changing a base class might break the classes that extend it. It is for this reason that how a base class does its job should be hidden from the classes that extend it so that they cannot make use of something that might well change. This is the reason that extending classes cannot access the private variables and functions of the base class.
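Pulling the article's pieces together, here is a hedged sketch of the Point/PointColor pair: overloaded constructors selected by signature, and a subclass that can only reach the base class's private fields through the public get/set methods. The method names follow the article; the describe() helper and the exact field layout are illustrative additions, not the article's code.

```java
// Point with overloaded constructors; each signature selects a
// different initializer.
class Point {
    private int X;
    private int Y;

    Point() { }                           // parameter-less default
    Point(int a, int b) { X = a; Y = b; } // signature int,int
    Point(int a) { X = a; }               // signature int: sets X only

    void setX(int x) { X = x; }
    void setY(int y) { Y = y; }
    int getX() { return X; }
    int getY() { return Y; }
}

// PointColor extends Point: everything Point is, plus a color.
class PointColor extends Point {
    private int Color;

    void setColor(int color) { Color = color; }
    int getColor() { return Color; }

    // X and Y are private to Point, so "return X;" here would not
    // compile - the subclass goes through the public methods like
    // any other caller.
    String describe() {
        return getX() + "," + getY() + " color " + getColor();
    }

    public static void main(String[] args) {
        PointColor PC = new PointColor();
        PC.setX(10);
        PC.setColor(3);
        System.out.println(PC.describe()); // 10,0 color 3
    }
}
```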
http://www.i-programmer.info/ebooks/modern-java/4570-java-inheritance.html?start=1
G'day all. Quoting oleg at pobox.com: : Since Andrew Bromage wished for that interesting monad, perhaps he has : in mind a good example of its use. We are particularly interested in a : short example illustrating soft-cut (and, perhaps, `once'). No obvious small examples of soft cut spring to mind. (If Fergus is listening, he might have a few suggestions...) In Prolog parlance, there are three types of cut operation: - A "red cut" is anything which prunes away solutions. Red cuts are usually considered bad style because they have no logical interpretation. - A "green cut" is any cut which does not prune solutions, but which may prune different proofs of the same solution. - A "blue cut" prunes neither solutions, nor proofs. It's basically an efficiency hack, where the programmer inserts a cut to tell the Prolog implementation that some piece of code is deterministic when the implementation can't infer that. Green and blue cuts are sometimes collectively called "grue cuts". The most obvious use for "once" (which I may accidentally call "commit") is for blue cuts. This is not so useful in Haskell, but you never know. The second most obvious use is for those times when some goal isn't technically deterministic, but you never actually look at the "output". Mercury automatically inserts these commit operations if it can tell that the output of some goal is never consulted. One situation where you might use this is in negation-as-failure: gnot :: (Logic m) => m a -> m () gnot m = ifte (once m) (const gfail) (gsuccess ()) The point of the "once" is that when the "then" branch fails, the system won't backtrack into m. There's no point, since it's always going to fail. Another example of pruning is any situation where you are doing some kind of search which would normally be intractable, but you have a heuristic. If the heuristic is "safe" (that is, if whenever it can be applied, applying it results in no solutions being lost), then the cut is green. Otherwise it's red. 
(But that's sometimes okay; if it's an NP-hard problem, for example, you just make do with the approximation provided by the heuristic.) With soft cuts, you can express it like this: optimise curState | isGoalState curState = gsuccess success | otherwise = ifte (tryHeuristic curState) (\h -> do -- "then" case s <- nextStateWithHeuristic h curState optimise s) (do -- "else" case s <- nextState curState optimise s) The soft cut guarantees that you commit to the heuristic if it applies. As an example, here's a simple (though not THAT short) tic-tac-toe game. The solution is highly artificial, since the "next move" computation is effectively deterministic. A better example might be solving Sudoku problems, but that's harder to set up than tic-tac-toe. > {-# OPTIONS -fglasgow-exts #-} > {-# OPTIONS -fallow-undecidable-instances #-} The -fallow-undecidable-instances will be explained in a moment. > module Main where > > import Control.Monad > import Control.Monad.Trans > import LogicT > import SFKT > import Data.List OK, now the monad that most of the computation will be done in... > class (Monad m, MonadIO (t m), LogicT t, MonadPlus (t m)) => MyMonT t m > instance (Monad m, MonadIO (t m), LogicT t, MonadPlus (t m)) => MyMonT t m This is the reason for -fallow-undecidable-instances. To make the types not so unwieldy, we would ideally like typeclass synonyms, but Haskell doesn't support them. So this will have to do. > data Value = B | X | O deriving (Show, Eq, Ord) > type Player = Value We're going to overload the Value type with two meanings: It can either mean a value on the tic-tac-toe board, or it can refer to a player (either X or O). 
Code to switch players: > otherPlayer :: Player -> Player > otherPlayer X = O > otherPlayer O = X Code to handle the board: > data Board > = Board Value Value Value Value Value Value Value Value Value > deriving (Show, Eq, Ord) > > blankBoard :: Board > blankBoard = Board B B B B B B B B B > > -- Return true if the board is a win for player p. > win :: Value -> Board -> Bool > win p (Board a b c d e f g h i) > = (a == b && b == c && a == p) > || (d == e && e == f && d == p) > || (g == h && h == i && g == p) > || (a == d && d == g && a == p) > || (b == e && e == h && b == p) > || (c == f && f == i && c == p) > || (a == e && e == i && a == p) > || (c == e && e == g && c == p) > > draw :: Board -> Bool > draw (Board a b c d e f g h i) > = not (any (==B) [a,b,c,d,e,f,g,h,i]) We also need to encode the desirability of a board state. We do this with an enum type such that more desirable states come first in Ord. > data State = Win | Draw | Lose deriving (Show, Eq, Ord) > > otherState :: State -> State > otherState Win = Lose > otherState Lose = Win > otherState Draw = Draw A move is a number from 1 to 9 representing the cell to put your mark in: > type Move = Int > > move :: (MonadPlus m) => Move -> Player -> Board -> m Board > move 1 p (Board B b c d e f g h i) = return $ Board p b c d e f g h i > move 2 p (Board a B c d e f g h i) = return $ Board a p c d e f g h i > move 3 p (Board a b B d e f g h i) = return $ Board a b p d e f g h i > move 4 p (Board a b c B e f g h i) = return $ Board a b c p e f g h i > move 5 p (Board a b c d B f g h i) = return $ Board a b c d p f g h i > move 6 p (Board a b c d e B g h i) = return $ Board a b c d e p g h i > move 7 p (Board a b c d e f B h i) = return $ Board a b c d e f p h i > move 8 p (Board a b c d e f g B i) = return $ Board a b c d e f g p i > move 9 p (Board a b c d e f g h B) = return $ Board a b c d e f g h p > move _ _ _ = mzero Utility "axiom of choice" function: > choose :: (MonadPlus m) => [a] -> m a > choose = 
msum . map return Right, now on to the game. Assume that there's a function: best :: (MyMonT t m) => Player -> Board -> t m (State, Board) where best p b makes the best move for player p on board b, returning the new board as well as an estimate about how good the position is (i.e. whether it's a theoretical win, lose or draw for player p). Then here's how to play a game with the computer playing itself. X moves first. > game :: (MyMonT t m) => t m () > game > = game' X blankBoard > where > game' p b > | win X b > = liftIO (putStrLn "X wins!") > | win O b > = liftIO (putStrLn "O wins!") > | draw b > = liftIO (putStrLn "Draw!") > | otherwise > = do > (_,b') <- once (best p b) > liftIO (putStrLn $ show b') > game' (otherPlayer p) b' > > main > = observe game Now all we need to do is write "best". This is a simple problem in AI. Basically, you're trying to do a minimax search. If this is a "goal state" (i.e. an ACTUAL win, lose or draw), then we're done. If not, we examine all of the successor states, assuming that our opponent will make their best move, and we pick the one that's best for us. best :: (MyMonT t m) => Player -> Board -> t m (State, Board) best p b | win p b = return (Win, b) | win (otherPlayer p) b = return (Lose, b) | draw b = return (Draw, b) | otherwise = do wbs <- runN Nothing (do m <- choose [1..9] b' <- move m p b (w,_) <- best (otherPlayer p) b' return (otherState w,b')) let (w,b') = minimum wbs return (w, b') Unfortunately, this is too slow for interactive use. Certainly, I ran out of patience after a minute. However, thankfully there are a couple of safe heuristics which work just fine with tic-tac-toe. The first is that if you can win in this move, you should do so. The second is that if the first heuristic doesn't work, then you should see if there is any move that your opponent could make where they could win on the next move. If so, you should move to block it. 
The code looks like this: > best :: (MyMonT t m) => Player -> Board -> t m (State, Board) > best p b > | win p b > = return (Win, b) > | win (otherPlayer p) b > = return (Lose, b) > | draw b > = return (Draw, b) > | otherwise > = ifte (once (do > m <- choose [1..9] > b' <- move m p b > guard (win p b') > return (Win, b'))) > return > (ifte (once (do > m <- choose [1..9] > b' <- move m (otherPlayer p) b > guard (win (otherPlayer p) b') > return m)) > (\m -> do > b' <- move m p b > (w,_) <- best (otherPlayer p) b' > return (w,b')) > (do > wbs <- runN Nothing (do > m <- choose [1..9] > b' <- move m p b > (w,_) <- best (otherPlayer p) b' > return (otherState w,b')) > let (w,b') = minimum wbs > return (w, b'))) And we're done. Runs nice and fast now. Some of the interesting examples in Prolog are moot in Haskell by virtue of the fact that Haskell is "strongly moded". Having said that, there are three main areas where I find soft-cut useful. The first is directing search. People who liberally use <|> and <?> in Parsec probably know exactly what I mean by this. The second is tailoring failure. A lot of Prolog programs just fail without further comment. There was once a logic language implementation (which I won't name) whose front-end was implemented this way. Imagine if you fed your compiler a program, and the only diagnostic that you got was "Your program contains one or more errors." Soft cut gives you a logical way to specify _why_ some task failed. Well first off, the combination of soft-cut and once gives you a logical negation-as-failure: gnot :: (Logic m) => m a -> m () gnot m = ifte (once m) (\_ -> gfail) (gsuccess ()) That's a useful alternative to "guard". Cheers, Andrew Bromage
http://www.haskell.org/pipermail/haskell/2005-June/016037.html
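For comparison with the Haskell above, the same minimax idea can be sketched in Python; the board representation and all names here are mine, not from the post:

```python
# Minimal minimax sketch for tic-tac-toe, mirroring the "best" function above.
# Board: list of 9 cells, each 'X', 'O', or None.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board, p):
    """True if player p has three in a row."""
    return any(all(board[i] == p for i in line) for line in WIN_LINES)

def other(p):
    return 'O' if p == 'X' else 'X'

def best(board, p):
    """Return (score, move) for player p: 1 = win, 0 = draw, -1 = loss."""
    if winner(board, other(p)):
        return -1, None          # opponent already won: a Lose state
    moves = [i for i, c in enumerate(board) if c is None]
    if not moves:
        return 0, None           # board full: a Draw state
    # Examine all successor states, assuming the opponent replies optimally.
    outcomes = []
    for m in moves:
        board[m] = p
        score, _ = best(board, other(p))
        board[m] = None
        outcomes.append((-score, m))
    return max(outcomes)
```

From the near-win position `['X','X',None,'O','O',None,None,None,None]` it picks cell 2, the immediate win, which is exactly what the first heuristic in the post short-circuits to.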
note davido <p>If this is seriously being considered (as opposed to the lively banter that goes on within the p5p list, usually leading to no change), I'm very much opposed.</p> <p>I totally get the usefulness of <c>//</c> and <c>//=</c>. I even get the well intentioned but possibly overly ambitious and misguided rationale that brought us <c>~~</c>.</p> <p>If it's such a great idea, let someone implement it via [mod://overload] and put its use into production. If he's still employed in 60 days, we can revisit the issue. ;) I do see that Acme::RenameTheOperators is an available namespace.</p> <div class="pmsig"><div class="pmsig-281137"> <br /><p>Dave</p> </div></div> 1038191
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=1038204
Hey, import Default.comment should work just fine. What error are you getting?

ImportError: No module named 'Default'

This is what I get.

Hm. There must be some issue with your installation. I assume you are saving your plugin somewhere like ~/Library/Application Support/Sublime Text 3/Packages/User. I'm using build 3028. Try making a plugin that will print the Python path to see if anything looks fishy, like:

import sublime, sublime_plugin
#import Default.comment
import sys

class ExampleCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        #print(dir(Default.comment))
        print(sys.path)

This prints the following for me:

['/Applications/Sublime Text.app/Contents/MacOS', '/Applications/Sublime Text.app/Contents/MacOS/python3.3.zip', '/Users/eric/Library/Application Support/Sublime Text 3/Packages']

I'm sorry I'm replying so late. I thought I had email notifications for this thread, but I guess I don't. I put the provided code as a plugin in the path you mentioned. However, I don't see any output in the console. I'm not sure I have the entire picture, and the documentation about plugin creation is really scarce. Half of the things are just educated guesses that I'm making. Anyway, thanks for the help!
https://forum.sublimetext.com/t/st3-importing-from-the-default-plugins/9744/3
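The sys.path check from the thread generalizes to any Python import problem. Here is a plain Python 3 helper (the function name is mine) that returns the module on success and the searched paths on failure:

```python
# When an import fails, the first thing to inspect is sys.path -- the list of
# directories Python (or Sublime's embedded Python) searches for modules.
import importlib
import sys

def try_import(name):
    """Return the imported module, or the list of searched paths on failure."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return list(sys.path)
```

Calling `try_import("Default.comment")` inside Sublime would either return the module or show which directories the embedded interpreter actually searches.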
AE 305 HW-2 Group 11 28.11.2012 Ali Yıldırım-1747146 Durukan Tamkan-1747070 Elif Mehtap PELIT-1747039

INTRODUCTION
It is asked to find the unsteady heat distribution of a turbine blade which is exposed to a hot environment due to hot gases in the combustor. The homework problem is about visualizing the temperature distribution over a turbine which is divided into grids as: Unsteady temperature distribution of the body is described by its heat transfer equation as: where k is the thermal conductivity. The heat transfer equation is a partial differential equation, and to solve this partial expression the finite volume method is employed to understand the temperature distribution. Some properties stated in the problem are given as: The convective heat flux on the blade surface and in the cooling hole surfaces is given by: where the chord of the turbine blade is L=10 cm.

THEORY
This method is used to calculate the values of the conserved variables averaged across the volume, and not at nodes or surfaces; the values of the conserved variables are located within the volume element. The method is based on the conservation laws. Conservation law (in integral form): Conservation law (in differential form): Divergence theorem: One advantage of the finite volume method over finite difference methods is that it does not require a structured mesh (although a structured mesh can also be used). Finite volume methods are especially powerful on coarse nonuniform grids, since the flux entering a given volume is identical to that leaving the adjacent volume.
Finite difference method which is mainly based on the volume and flux integrals works in such a way and can be formulated as follows: volume integrals in a partial differential are converted to surface integrals by using divergence theorem and the terms obtained from integration become flux which enters to one cell after leaving the cell in its neighbourhood. dt.1.mxn : Max number of nodes parameter (mxc=5001.Tbc(10) common /grad/ dTdx(mxc).mxn=3001) common /grid/ ncell.Read the input data and initialize the solution call INIT .mxc : Max number of cells c.mxc).for Nth boundary. Solution domain and unstructured domain is needed.dTdy(mxc) data mxstep.Tcell(mxc).neigh(3. Sample of the solution domain is as follows: Tbc(1)= 1200 Tbc(2)= 200 Tbc(3)= 200 Tbc(4)= 200 For the solution flux terms are based on cell averaged variables and the FORTRAN program is written as follows: c..0. It is assumed that constant temperature distribution for each triangular cell..1000/ dt.mxc).area(mxc) common /var/ time. > xy(2.iostep/7000.node(3.delTallow/0.mxn). Where: gives the fluxes at cell boundaries..nnode.01/ c. Tbc(10) character fn*16 logical ok . do n = 1. ' Nstep.Output the final solution call TECout(nstep) stop 'FINE' end subroutine INIT parameter (mxc=5001.and.mxn=3001) common /grid/ ncell. nstep = 0 DO WHILE (nstep .area(mxc) common /var/ time.c..and..Start the solution loop delTmax = 1. delTallow) nstep = nstep + 1 time = time + dt c.ne. delTmax .Sweep all the cells and solve for T^n+1 delTmax = 0.ncell c.eq.iostep) ..Evaluate temperature gradients for each cell call GRADIENT c. mxstep ) > call TECout(nstep) ENDDO c. 0 .neigh(3..dt.gt.Output the intermediate solutions if( mod(nstep. > xy(2. delTmax) enddo print*.mxc).mxn). nstep .delT delTmax = max(abs(delT).node(3.nnode.mxc). Time..delTmax c..time. DelTmax :'.nstep.Evaluate temperature change for the cell delT = dt/area(n) * FLUX(n) Tcell(n) = Tcell(n) .lt. mxstep .Tcell(mxc). 
n) n3 = node(3.Read the grid data fn='tblade.Initialize the solution Tic = 25.*) (no.n=1.(xy(i.n1))*(xy(1. fn..i=1.n1)) > (xy(2. ok ) then print*.n3)-xy(1. ' # of cells :'.3)...ncell print*.(node(i.n2)-xy(2.i=1.(neigh(i. ' Reading '.n=1.n1)) ) enddo c.Set Initial and Boundary Conditions Tbc(1)= 1200.fn open(5.ncell Tcell(n) = Tic .*) ncell.n).5*((xy(1.n3)-xy(2.ncell n1 = node(1.n1))*(xy(2.*) (no.nnode c.not.2).form='formatted') read(5. ' '. ' # of nodes :'.3).c.Compute cell areas do n = 1.n) area(n) = 0.n). do n =1.ncell) close(5) print*.dat' inquire(FILE=fn.EXIST=ok) if( .nnode read(5. c. Tbc(2)= 200.file=fn. Tbc(4)= 200.nnode) read(5. ' does not exist! \n\n' stop endif print*.n2)-xy(1.i=1.n). Tbc(3)= 200..n) n2 = node(2. n2)-xy(2..mxc).gt.n2)-xy(1.dTdy(mxc) DO n = 1.n) if(ne .n1) dy = xy(2.5*(Tcell(n)+Tneigh) dTdx(n) = dTdx(n) + Tface*dy dTdy(n) = dTdy(n) .Tbc(10) common /grad/ dTdx(mxc). 3) then n2=node(nf+1.mxc).enddo call TECout(0) return end subroutine GRADIENT parameter (mxc=5001.neigh(3.n) endif dx = xy(1. do nf = 1.mxn=3001) common /grid/ ncell.area(mxc) common /var/ time.n1) ne = neigh(nf.dt.n) else n2=node(1.mxn).walls !.Tcell(mxc).n) if(nf .lt. 0) then Tneigh = Tcell(ne) else !.nnode.real neighbor Tneigh = Tbc(-ne) endif Tface = 0. dTdy(n) = 0..ncell dTdx(n) = 0.3 n1 = node(nf.Tface*dx enddo dTdx(n) = dTdx(n)/area(n) . > xy(2.node(3. flux_y*dx) enddo FLUX = -alpha*FLUX return .5 else !.n2)-xy(2.node(3.n) if(ne .mxc).n1) dy = xy(2.n) if(nf .mxn=3001) common /grid/ ncell..Tcell(mxc). > xy(2.dTdy(mxc) data alpha /22.lt.real neighbor flux_x = (dTdx(n)+dTdx(ne))*0. 0) then !.nnode.n) else n2=node(1.gt..n) endif dx = xy(1.5 endif FLUX = FLUX + (flux_x*dy .neigh(3.mxn).dt.Tbc(10) common /grad/ dTdx(mxc).Sum surface fluxes over the cell faces do nf = 1.walls flux_x = dTdx(n)*0.area(mxc) common /var/ time. 
3) then n2=node(nf+1.5 flux_y = (dTdy(n)+dTdy(ne))*0.n1) ne = neigh(nf.5E-6/ FLUX = 0.dTdy(n) = dTdy(n)/area(n) ENDDO return end function FLUX(n) parameter (mxc=5001..3 n1 = node(nf. c.mxc).n2)-xy(1.5 flux_y = dTdy(n)*0. Tbc(10) real Tnode(mxn) integer npass(mxn) . > xy(2.nnode. "TEMPERATURE"'/. form='formatted') write(5. > ' ZONE N='.Tcell(mxc).100) nnode.file=fname.area(mxc) common /var/ time.' F=FEPOINT '.node(3.ncell write(5.mxc).nnode.5)') float(nstep)/100000 read(string.Set the output file name write(string. > xy(2.n).mxc).102) (node(1.Tcell(mxc).n=1.n).101) (xy(1.5)) 102 format (3(1x.n).area(mxc) common /var/ time.neigh(3.node(2..a5)') ext fname = 'temp-'//ext//'.mxn). I6.node(3.mxn).end subroutine TECout(nstep) parameter (mxc=5001...' E='.ext*5 c.mxn=3001) common /grid/ ncell. I6.n).dt.string*8..'(3x.e12.dt.' ET=triangle' ) 101 format (3(1x.i6)) return end subroutine TEMPNODE(Tnode) c.Evaluate average temperatures at nodes call TEMPNODE(Tnode) c.ncell) close(5) 100 format (' VARIABLES= "X". "Y".neigh(3.Tnode(n).mxn=3001) common /grid/ ncell.mxc).tec' c.node(3.n).xy(2.Tbc(10) real Tnode(mxn) character fname*32.Evaluate node temperatures by averaging the cell temperatures parameter (mxc=5001.mxc).Output the solution and the grid in TECPLOT format open(5.nnode) write(5.'(f8.n=1. Find the contribution of cells to the node temperatures do n=1.. As we can see blade cooling hole is not enough to provide keep all blade cool.3 nn = node(nf.nnode Tnode(n)=Tnode(n)/npass(n) enddo return end RESULTS & DISCUSSION With one cooling hole We plotted the graph while our mx step 7000 and iostep 1000. This means we will get 7 temperature distributions with one initial state situation. While time is increasing. . Nevertheless we can say cooling hole is enough to keep their neighbor environment cool. npass(n) = 0 enddo c.Average the total node temperature with # of contributing cells do n=1..do n=1. 
      do n = 1, nnode
        Tnode(n) = 0.
        npass(n) = 0
      enddo
c.....Find the contribution of cells to the node temperatures
      do n = 1, ncell
        do nf = 1, 3
          nn = node(nf,n)
          Tnode(nn) = Tnode(nn) + Tcell(n)
          npass(nn) = npass(nn) + 1
        enddo
      enddo
c.....Average the total node temperature with # of contributing cells
      do n = 1, nnode
        Tnode(n) = Tnode(n)/npass(n)
      enddo
      return
      end

RESULTS & DISCUSSION

With one cooling hole: We plotted the graph with mxstep 7000 and iostep 1000. This means we will get 7 temperature distributions plus one initial state. While time is increasing, the blade is getting hotter, starting from the aft. As we can see, one blade cooling hole is not enough to keep the whole blade cool. Nevertheless, we can say the cooling hole is enough to keep its neighboring region cool. It can be observed, considering the temperature scale, that due to the hot gases the turbine blade gets hotter and the blade tips are affected first.

When a second cooling hole is added, the temperature distributions are as follows: We changed the given blade. First, the original hole has been kept constant to be able to add a second cooling hole onto the turbine blade. Secondly, we changed the first hole's place in the coordinate system by changing the x and y points of the first hole. By adding all this information into the blade .d file to put in the second cooling hole, we obtained the illustration shown below. With the second hole the temperature distribution seems more uniform, and the blade is affected less by the hot gases when the temperature scale is considered. Especially in the middle part of the turbine blade the temperature seems low if we compare with one cooling hole. Apart from that, we couldn't observe too much difference between the one-hole and two-hole blades; we can briefly say the second cooling hole is useful to keep the temperature lower in the middle section.

Steady-State Temperature Distribution: After many solution steps the steady-state condition has been reached. In the steady-state temperature distribution the turbine blade is in its neutral situation, and the temperature distribution keeps its stability over the turbine blade surface.

REFERENCES
http://www.mathematik.uni-dortmund.de/~kuzmin/cfdintro/lecture5.pdf (Lecture notes)
https://www.scribd.com/doc/114850973/AE-305-HW-4-final
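The cell-update loop in the Fortran above (delT = dt/area(n) * FLUX(n), with hot and cold boundary temperatures from Tbc) has a compact 1D analogue. This Python sketch uses an illustrative grid, diffusivity, and step count, not the assignment's data:

```python
# 1D explicit finite-volume analogue of the report's update: each cell's
# temperature changes by dt times the net conductive flux through its two
# faces. Boundary values 1200 and 200 mirror the Tbc settings; n, alpha,
# dx, dt and steps are illustrative assumptions.
def step(T, alpha, dx, dt, t_left, t_right):
    n = len(T)
    new = T[:]
    for i in range(n):
        left = t_left if i == 0 else T[i - 1]
        right = t_right if i == n - 1 else T[i + 1]
        new[i] = T[i] + alpha * dt / dx**2 * (left - 2.0 * T[i] + right)
    return new

def solve(n=20, alpha=1.0, dx=1.0, dt=0.2, steps=5000):
    T = [25.0] * n                  # uniform initial temperature, as in INIT
    for _ in range(steps):
        T = step(T, alpha, dx, dt, 1200.0, 200.0)
    return T
```

With alpha*dt/dx**2 = 0.2 the explicit scheme is stable, and after enough steps the profile relaxes to the linear steady state between the two boundary temperatures, the 1D counterpart of the report's steady-state plot.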
Description

Microsoft has done an outstanding job documenting the .NET Framework and providing a wealth of examples. The majority of all the examples provided will show a developer in detail how to utilize the Framework or concept in question, that is, if you're a VB or C# programmer! What about NetCOBOL? Well, Fujitsu Software is working on creating samples and making them accessible to the general public. In the meantime, however, what's a COBOL developer supposed to do to solve the problem you currently have? In this article we'll take a look at some C# samples of code and then translate those samples into COBOL. This article is intended to be used as a model for you to follow when you run into a C# example and need to convert it to COBOL.

A Really Neat Trick!

One of the brightest people I know of has assisted Fujitsu Software in creating a dual set of CDs titled "Microsoft .NET for COBOL Programmers". (This set of CDs is currently available from Fujitsu Software.) The author of the CDs is Howard Hinman, President and CEO of Hinman Consulting. Howard had a very good suggestion that I used while learning the .NET environment: create a duplicate project in either VB.NET or C#. During your development phase, create an additional project in your solution using either of Microsoft's languages. Howard is more comfortable with VB so he chose VB.NET; I am more comfortable with C#, so I use it. What we both do, however, is create an additional project in our COBOL solution. If I am creating a WinForm application, then I create another WinForm application in C#. As I am going through the COBOL development process, if I have any questions on how a specific namespace is implemented, I use the other language project to help me out. Now don't worry if you don't know C# at this point. You will only be using it to try to help determine what a certain object's methods and properties are. In C# (and VB.NET) a feature called 'intellisense' has been implemented.
Intellisense provides a programmer with additional information about an object when they are using that object. We'll get to more of this in a little bit. Syntactical Differences C# uses an OBJECT-DOT-PROPERTY syntax. The object in question may be a WinForm, a text box or any of the over 5,000 namespaces within the Framework. The key is when a developer presses the dot (.) . When the dot is pressed a context sensitive window appears showing the developer what options are available for that particular object. The window may contain other objects, properties or methods that are accessible to the object. Instead of remembering where a specific property or method is within an object, by using intellisense a developer can 'drill down' to the property or method they are looking for. For instance, the following code shows intellisense for a command button object. VS.NET is waiting for the developer to select an item or begin typing. When a developer begins typing intellisense will position the list based on the input. So instead of keying say 'Text', the developer would begin keying in 'T' and intellisense will move the list to show all items beginning with 'T'. Notice also the presence of tool-tip text to inform the developer of what they are selecting. We are going to modify the text on the face of the command button to be "Call COBOL". To do this we need to select the 'text' property and set it to "Call COBOL". The following image details the code necessary to accomplish this. (We are interested in converting the code to COBOL so we will complete the C# code for you to compare). COBOL utilizes a different syntax to call a method of an object or update it's properties. The syntax used to set a property to value is SET PROP-WHATEVER OF OBJECT TO SOMETHING. The syntax used to invoke a method is INVOKE OBJECT "METHOD" USING (any parameters) RETURNING (any parameters). The items in blue are required and are case sensitive. 
But what if the property or method you are looking for is contained within another object referenced by the current object? Simple, add another 'OF OBJECT' phrase to the statement. We'll assume you are going to update a property of an object that is called from another object. The syntax would be similar to: SET PROP-WHATEVER OF OBJECT1 OF OBJECT2 TO SOMETHING. Now let's convert our sample. Conversion Following the guidelines detailed in the preceding section, to update a button text in COBOL we would code the following: SET PROP-TEXT OF button1 TO "CALL COBOL". The completed method is: Notice the use of the field PROP-TEXT. PROP-TEXT is a pseudonym for the actual 'Text' property value for the control. This is defined in the repository of the class and has the following syntax: In COBOL, we use the AS clause to create aliases for names that cannot otherwise be written in COBOL. In other words, you create a name friendly to COBOL and put the external name that might not be so COBOL friendly in quotes after AS. Wrap-Up While the sample presented is a simple one, the theory behind it flows through to all of the NetCOBOL for .NET environment. Research what you are attempting to accomplish. Locate the objects, property and method you are attempting to use and then create either a C# or VB.NET project, if possible. Create the programming necessary to do what you are attempting and see how it is done in C# if no other sample are available. Next, copy the line of code into your COBOL project (commenting it out of course) and then following the above guidelines, translate it into COBOL. Over time this process will become second nature to you and soon you'll be coding native .NET calls without doing the research. Remember, one has to learn to crawl before you can run a marathon! Happy Coding! View All
http://www.c-sharpcorner.com/article/converting-C-Sharp-to-cobol/
I searched, but it was difficult to find an article about this topic, so I asked a question. What I do not know is as follows: if a pointer is stored in a std::atomic for indirect reference, is it possible to access the contents of the pointer? (18/08/30 12:17 postscript) Only the atomic variable of the pointer can access the reference destination. Thanks for any help! :)

#include <iostream>
#include <atomic>
#include <thread>

struct mystruct {
    ~mystruct() { value = 0; }
    int value = 0;
};

class myclass {
    mystruct m_mystruct;
    std::atomic<mystruct*> m_atomic_mystruct_ptr;
public:
    myclass() { m_atomic_mystruct_ptr.store(&m_mystruct); }
    mystruct& get_mystruct() { return *m_atomic_mystruct_ptr.load(); }
    void mystruct_worker(std::size_t num) {
        while (num-- > 0)
            m_atomic_mystruct_ptr.load()->value++;
    }
};

int main() {
    myclass m;
    std::thread t(&myclass::mystruct_worker, &m, 10000);
    std::this_thread::sleep_for(std::chrono::nanoseconds(10));
    int i = m.get_mystruct().value;  // is this thread-safe?
    std::cout << i << "\n";
    t.joinable() ? t.join() : t.detach();  // (13:01 postscript) Simply t.join() is OK!
}

Answer # 2
Hello. I have never thought of using it, but it is impossible. The atomic variable is wrapped in the atomic template (I do not know the details) so that it can be accessed atomically. When a pointer is an atomic variable, it is guaranteed to be atomically accessible as defined, because the pointer itself is wrapped properly. However, the pointee is naturally unwrapped and bare. There is no such wrapping for it, so it is not guaranteed that it can be accessed atomically.
Answer # 1
No. The atomicity (thread safety) guaranteed by std::atomic<T*> p is only for write/read operations on the pointer p itself. The thread safety of the T-type object *p pointed to by the pointer must be handled by the T type itself. The code in the question causes a data race on mystruct::value, so the program's behavior is undefined. It is not directly related to the subject matter, but one spot in the posted code interested me: since t is always joinable there, it would be enough to write t.join();. Also, if t.joinable() were false, t.detach() would throw a std::system_error exception, wouldn't it?
https://www.tutorialfor.com/questions-100737.htm
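The accepted answer's point, that std::atomic<mystruct*> protects only the pointer, so mystruct::value needs its own synchronization (e.g. making the member itself a std::atomic<int>, or guarding it with a mutex), has the same shape in any language. As an illustration only, here it is in Python, with a lock playing the role the member's own atomic or mutex would play in C++:

```python
# The reference handed to each thread is shared safely, but mutating the
# referent still needs its own synchronization.
import threading

class MyStruct:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()   # plays the role of std::atomic<int>

    def increment(self):
        with self._lock:
            self.value += 1

def worker(obj, n):
    for _ in range(n):
        obj.increment()

obj = MyStruct()
threads = [threading.Thread(target=worker, args=(obj, 10000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, the read-modify-write of value could interleave across threads, which is the Python counterpart of the data race the answer describes.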
This edition of the Gamma Manual, last updated 20 March 2010, documents ‘Gamma’ Version 2.0. ‘Gamma’ is a collection of assorted Guile modules. Version 2.0 provides a ‘syslog’ interface, a module for interfacing with SQL (more precisely: MySQL and PostgreSQL) databases, and a module for writing XML parsers. The ‘(gamma syslog)’ module provides bindings for ‘syslog’ functions: Opens a connection to the system logger for a Guile program. Arguments have the same meaning as in openlog(3): Syslog tag: a string that will be prepended to every message. Flags that control the operation. A logical or (logior) of one or more of the following: Write directly to the system console if there is an error while sending to the system logger. Open the connection immediately (normally, the opening is delayed until the first message is logged). Don't wait for child processes that may have been created while logging the message. The converse of ‘LOG_NDELAY’; opening of the connection is delayed until syslog is called. This is the default. Print to stderr as well. This constant may be absent if the underlying implementation does not support it. Include PID with each message. Specifies what type of program is logging the message. The facility must be one of: Example: Returns the tag used in the most recent call to openlog. Distribute a message via syslogd. The text supplies the message text. The prio specifies the priority of the message. Its value must be one of the following: Example: The priority argument may also be ‘OR’ed with a facility value, to override the one set by the openlog function, e.g.: It is common to use the format function to prepare the value of the text argument: Create a syslog port for the given priority. A syslog port is a special output port such that any writes to it are transferred to the syslog with the given priority. The port is line buffered. For example, the following code: results in sending the string ‘A test message’ to the syslog priority LOG_ERR.
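These bindings mirror the C openlog/syslog/closelog API one-to-one, so the calls translate directly to, for instance, Python's standard syslog module, shown here only as a cross-reference (it is not part of Gamma):

```python
# Python's standard-library syslog module wraps the same C interface that
# (gamma syslog) binds, with the same flag and facility constants.
import syslog

syslog.openlog("gamma-demo", syslog.LOG_PID | syslog.LOG_NDELAY, syslog.LOG_USER)
syslog.syslog(syslog.LOG_INFO, "A test message")
# A priority OR'ed with a facility overrides the facility set by openlog:
syslog.syslog(syslog.LOG_ERR | syslog.LOG_DAEMON, "An error message")
syslog.closelog()
```

As in the Guile binding, openlog is optional; a bare syslog call opens the connection with default settings.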
Return #t if openlog was previously called. Close the logging channel. The use of this function is optional. The ‘(gamma sql)’ module provides interface with MySQL and PostgreSQL database management systems. Usage: This function opens a connection to the SQL server and returns a connection object. This object is then used as argument to sql-query and sql-close-connection functions. The params argument supplies the connection parameters. It is a list of conses, each of which is composed from a keyword and a value. Defines the type of the SQL interface. Valid values are: ‘"mysql"’, to connect to a MySQL server, and ‘"pgsql"’, to connect to a Postgres server. Defines server host name. The value is a string, containing the host name or ASCII representation of the host IP address. Defines the port number server is listening on. The value is a decimal port number. If the SQL server is listening on a socket, this keyword defines the UNIX pathname of the socket. This keyword cannot be used together with ‘#:host’ or ‘#:port’ keyword pairs. Sets the SQL user name. Sets the SQL user password. Sets the database name. Defines full pathname of the SSL certificate to use. If this keyword is present, the connection with the server will be encrypted using SSL. Currently it is implemented only for MySQL connections. Use the specified MySQL configuration file to obtain missing parameters. Obtain missing parameters from the specified group in the MySQL configuration file (see ‘#:config-file’, above). Close the SQL connection. The conn must be a connection descriptor returned from a previous call to sql-open-connection. Conn is a connection descriptor returned from a previous call to sql-open-connection, and query is a valid SQL query. This function executes the query and returns its results. If query is a SELECT query (or a similar query, returning tuples), the return is a list, each element of which is a list representing a row. Elements of each row (columns) are string values. 
If query results in some modifications to the database (e.g. an UPDATE statement), the sql-query function returns the number of affected database rows. An error of this type is raised when any of the above functions fails. Two arguments are supplied: a string describing the error, and error message from the underlying SQL implementation. This syntax executes the Scheme expression expr and calls handler if a gsql-error exception occurs. In its second form, sql-catch-failure calls a function named sql-error-handler if a sql-error exception occurs. The sql-error-handler must be declared by the user. The error handler must be declared as follows: where: The error key (‘sql-error’). Name of the Scheme function that encountered the error. Format string suitable for format. Arguments to fmt. Interface-specific error description. It is a list consisting of two elements. The first element is an integer code of the error, if supported by the underlying implementation, or #f if not. The second element is a textual description of the error obtained from the underlying implementation. For example: Evaluates Scheme expression expr and returns the result of evaluation, or value if a gsql-error exception occurs. In its second form, returns #f in case of error. The ‘(gamma expat)’ module provides interface to libexpat, a library for parsing XML documents. See, for a description of the library. Usage: Parsing of XML documents using Expat is based on user-defined callback functions. You create a parser object, and associate callback (or handler) functions with the events he is interested in. Such events may be, for instance, encountering of a open or closing tag, encountering of a comment block, etc. Once the parser object is ready, you start feeding the document to it. As the parser recognizes XML constructs, it calls the callbacks that are registered for them. Parsers are created using xml-make-parser function. 
In the simplest case, it takes no arguments, e.g.: The function xml-parse takes the parser as its argument, reads the document from the current input stream and feeds it to the parser. Thus, the simplest program for parsing XML documents is: This program is perhaps not so useful, but you may already use it to check whether its input is a correctly formed XML document. If xml-parse encounters an error, it signals the gamma-xml-error error. See section error handling, for a discussion on how to handle it. The xml-make-parser function takes optional arguments, which allow to set callback functions for the new parser. For example, the following code sets function ‘elt-start’ as a handler for start elements: The #:start-element-handler keyword informs the function that the argument following it is a handler for start XML documents. Any number of handlers may be set this way, e.g.: Definitions of particular handler functions differ depending on their purpose, i.e. on the event they are defined to handle. For example, a start element handler must be defined as having two arguments. First of them is the name of the tag, and the second one is a list of attributes supplied for that tag. Thus, for example, the following start handler prints the tag and the number of attributes: For a detailed description of all available handlers and handler keywords, see Expat Handlers. To further improve our example, suppose you need a program that will take an XML document as its input and create a description of its structure on output, showing element nesting levels by indenting their description. Here is how to write it. First, define handlers for start and end elements. Start element handler will print two indenting spaces for each level of ancestor elements, followed by the element name and its attributes and a newline. 
It will then increase the global level variable: The handler for end tags is simpler: it must only decrease the level: Finally, create a parser and parse the input: Gamma provides several functions for creating and modifying XML parsers. The xml-primitive-make-parser and xml-primitive-set-handler are lower level interfaces, provided for those who wish to further extend Gamma functionality. Higher level interfaces are xml-make-parser and xml-set-handler which we recommend for regular users. Return a new XML parser. If enc is given, it must be one of: ‘US-ASCII’, ‘UTF-8’, ‘UTF-16’, ‘ISO-8859-1’. If sep is given, the returned parser has namespace processing in effect. In that case, sep is a character which is used as a separator between the namespace URI and the local part of the name in returned namespace element and attribute names. Set the encoding to be used by the parser. The latter must be a value returned from a previous call to xml-primitive-make-parser or xml-make-parser. The sequence: is equivalent to: and to: Set XML handler for an event. Arguments are: A valid XML parser A key, identifying the event. For example, ‘#:start-element-handler’ sets handler which is called for start tags. See section Expat Handlers, for its values and their meaning. Handler procedure. Sets several handlers at once. Optional arguments (args) are constructed of keywords (as described in see handler-keyword), followed by their arguments, for example: Create a parser and set its handlers. Optional enc and sep have the same meaning as in xml-primitive-make-parser. The rest of arguments define handlers for the new parser. They must be supplied in pairs: a keyword (as described in see handler-keyword), followed by its argument. For example: This call creates a new parser for documents in ‘US-ASCII’ encoding and sets two handlers: for element start and for element end. This call is equivalent to: Parse next piece of input. 
Arguments are: A parser returned from a previous call to xml-primitive-make-parser or xml-make-parser. A piece of input text. A boolean value indicating whether this is the last part of the input. Equivalent to: unless input is an end-of-file object, in which case it is equivalent to: Reads XML input from port (or the standard input port, if it is not given) and parses it using xml-primitive-parse. When encountering an error, the ‘gamma xml’ functions use the Guile error reporting mechanism (see section `Error Reporting' in The Guile Reference Manual). The error key indicates what type of error it was, and the rest of the arguments supply additional information about the error. Recommended ways of handling errors in Guile are described in section `Handling Errors' in The Guile Reference Manual. In this chapter we describe how to handle errors in XML input and other errors reported by the underlying ‘libexpat’ library. An error of this type is signalled when one of the ‘gamma xml’ functions encounters an XML-related error. The arguments supplied with this error are: The error key (gamma-xml-error). The name of the function that generated the error. A format string. Arguments for ‘fmt’. An error description. If there is no additional information, it is #f. Otherwise it is a list of 5 elements which describes the error and its location in the input stream: A special syntax is provided to extract parts of the ‘descr’ list: Extract from descr the part identified by key. Use this macro in error handlers. Valid values for key are: Return the error code. Return the line number. Return the column number. Return #t if the description has a context part. Use the two keywords below only if this returned #t. Return the context string. Return the location within #:context where the error occurred.
If no special handler is set, the default Guile error handler displays the error and its approximate location on the standard error port. For example, given the following input file: the ‘xmlck.scm’ program (see xmlck.scm) produces: To provide more detailed diagnostics, catch the gamma-xml-error code and use the information from the ‘descr’ list. For example: When applied to the same input document as in the previous example, this code produces: This section describes all available element handlers. For clarity, each handler is described in its own subsection. For each handler, we indicate the keyword that is used when registering the handler, and the handler prototype. To register handlers, use the xml-make-parser or xml-set-handler functions. See section Creating XML Parsers, for a detailed discussion of these functions. Sets the handler for start (and empty) tags. The handler must be defined as follows: Arguments: Element name. A list of element attributes. Each attribute is represented by a cons (‘car’ holds the attribute name, ‘cdr’ holds its value). Sets the handler for end (and empty) tags. An empty tag generates a call to both start and end handlers (in that order). The handler must be defined as follows: Arguments: Element name. Sets a text handler. A single block of contiguous text free of markup may result in a sequence of calls to this handler. So, if you are searching for a pattern in the text, it may be split across calls to this handler. The handler itself is defined as: Arguments: The text. Sets a handler for processing instructions. Arguments are: The first word in the processing instruction. The rest of the characters in the processing instruction, after the target and the whitespace following it. Sets a handler for comments. Arguments: The text inside the comment delimiters. Sets a handler that gets called at the beginning of a CDATA section. The handler is defined as follows: Sets a handler that gets called at the end of a CDATA section.
The handler is defined as: Sets a handler for any characters in the document which wouldn't otherwise be handled. This includes both data for which no handlers can be set (like some kinds of DTD declarations) and data which could be reported but which currently has no handler set. Arguments: A string containing all non-handled characters, which are passed exactly as they were present in the input. The ‘default’ handler has the side effect of turning off expansion of references to internally defined general entities. Such references are passed to the default handler verbatim. This sets a default handler as above, but does not inhibit the expansion of internal entity references. Such entity references are not passed to the handler. The handler prototype is the same as in default-handler. Sets a skipped entity handler, i.e. a handler which is called if: Arguments are: Name of the entity. This argument is #t if the entity is a parameter entity, and #f otherwise. Sets a handler to be called when a namespace is declared. Arguments: Namespace prefix. Namespace URI. Sets a handler to be called when leaving the scope of a namespace declaration. This will be called, for each namespace declaration, after the handler for the end tag of the element in which the namespace was declared. The handler prototype is: Sets a handler that is called for XML declarations and also for text declarations discovered in external entities. Arguments: Version specification (a string), or #f for text declarations. Encoding. May be #f. ‘Unspecified’, if there was no standalone parameter in the declaration. Otherwise, #t or #f depending on whether it was given as ‘yes’ or ‘no’. Sets a handler that is called at the start of a ‘DOCTYPE’ declaration, before any external or internal subset is parsed. Arguments: Declaration name. System ID. May be #f. Public ID. May be #f. #t if the ‘DOCTYPE’ declaration has an internal subset, #f otherwise.
Sets a handler that is called at the end of a ‘DOCTYPE’ declaration, after parsing any external subset. The handler takes no arguments: Sets a handler for ‘attlist’ declarations in the DTD. This handler is called for each attribute, which means, in particular, that a single attlist declaration with multiple attributes causes multiple calls to this handler. The handler prototype is: Arguments: Name of the element for which the attribute is being declared. Attribute name. The default value if the attribute is a ‘#FIXED’ attribute, #t if it is a ‘#REQUIRED’ attribute, and #f if it is an ‘#IMPLIED’ attribute. Sets a handler that will be called for all entity declarations. Arguments: Entity name. For parameter entities, #t. Otherwise, #f. For internal entities, the entity value. Otherwise, #f. Base. System ID. For internal entities, #f. Public ID. For internal entities, #f. Notation name, for unparsed entity declarations. Otherwise, #f. Unparsed entities are entity declarations that have a notation (‘NDATA’) field, such as: Sets a handler that receives notation declarations. The handler prototype is: Sets a handler that is called if the document is not standalone, i.e. when there is an external subset or a reference to a parameter entity, but the document does not have ‘standalone’ set to "yes" in an XML declaration. The handler takes no arguments: Return the version of the expat library as a string. For example: Return the version of the expat library as a triplet: ‘(major minor micro)’. For example: Pass the current markup to the default handler (see section default-handler). This function may be called only from a callback handler. Return a textual description corresponding to the code argument. See catching gamma-xml-error, for an example of using this function. Return the number of the current input line in parser. Input lines are numbered from ‘1’. Return the number of the column in the current input line. Return the number of bytes in the current event.
Returns ‘0’ if the event is inside a reference to an internal entity, and for the end-tag event for empty element tags (the latter can be used to distinguish empty-element tags from empty elements using separate start and end tags). If you think you've found a bug, please report it to gray+gamma@gnu.org.ua. Be sure to include the maximum information needed to reliably reproduce it, or at least to analyze it. The information needed is: This document was generated by Sergey Poznyakoff on March 24, 2010.
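Drawing together the pieces documented above, the indenting pretty-printer described in the introduction can be sketched as follows. This is a hedged sketch: the module name (gamma xml), the exact output formatting and the handler names ‘elt-start’/‘elt-end’ are illustrative, but the procedures and keywords (xml-make-parser, xml-parse, #:start-element-handler, #:end-element-handler) and the handler signatures are the ones this manual documents.

```scheme
;; Sketch of the indenting structure-printer described in the manual.
;; Assumes the Gamma XML bindings are available as (gamma xml).
(use-modules (gamma xml))

(define level 0)

;; Start element handler: name is the tag, attrs a list of
;; (name . value) conses, as documented for #:start-element-handler.
(define (elt-start name attrs)
  (display (make-string (* 2 level) #\space))
  (display name)
  (for-each (lambda (attr)
              (format #t " ~a=~s" (car attr) (cdr attr)))
            attrs)
  (newline)
  (set! level (1+ level)))

;; End element handler: only has to decrease the nesting level.
(define (elt-end name)
  (set! level (1- level)))

;; Create the parser with both handlers and feed it standard input.
(define parser
  (xml-make-parser #:start-element-handler elt-start
                   #:end-element-handler elt-end))

(xml-parse parser)
```

Run against a well-formed document on standard input, this prints one line per element, indented two spaces per nesting level; a malformed document raises gamma-xml-error as described in the error-handling section.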
http://puszcza.gnu.org.ua/software/gamma/manual/gamma.html
One of the new templates added in Whidbey's BCL is Nullable<T>, which allows defining nullable value types, providing "nullness" on value types. The usage is very simple. In the method body anyone may want to query whether the value is a ValueType or a ReferenceType. Now we have to ask the instance to see if it's a Nullable<> instance. But whatever the template implementation used, there's no way to query it using the "is" keyword: there's no way to use the template as an abstract base class, so we can't know what the type is without using Reflection. I have a solution for this. If Nullable is declared using an interface defining the property HasValue, we can solve the problem, because any template implementation will also implement the interface, and "is" can be used to query the type of the reference. public interface INullable{ bool HasValue{ get; set; } } public class Nullable : INullable{ // implementation } try { JavaObject test = new JavaObject( "TestClass", "()V", null ); test.CallMethod( "Do", "()V", null ); } catch( JVMException e ) { Console.WriteLine( e.ToString() ); } try { DotNetObject dno = new DotNetObject( "mgdsample.dll", "mgd.Sample", new Object[] { new int[] { 23 , 32 } } ); dno.CallVoidMethod( "Metodo", new Object[] { new String[] { "AVER", "SALIR" } } ); } catch( CLRException e ) { System.out.println( e.Message ); }
http://weblogs.asp.net/DGonzalez/
Add Tailwind to a Gatsby app We'll keep things simple and generate a Gatsby app using npx and the very barebones Hello World starter. npx gatsby new gatsby-tailwind Once that has installed, be sure to change into the newly created gatsby-tailwind project directory. Add Tailwind to the project Install the required dependencies Install the development dependencies from our project root. npm install --save-dev tailwindcss autoprefixer Tailwind doesn't include any vendor prefixes out of the box so we'll be relying on autoprefixer to generate those for our final build. Create the Tailwind CSS I usually prefer to create a dedicated styles directory under src/styles when working on a project with Tailwind. Create the directory and our global CSS file now. mkdir -p src/styles && touch src/styles/main.css To add Tailwind to our Gatsby project, we simply need to add the @tailwind directives inside src/styles/main.css. These directives will be replaced with the actual Tailwind CSS during the build. @tailwind base; /* Your own custom base styles */ @tailwind components; /* Your own custom component styles */ @tailwind utilities; /* Your own custom utilities */ This src/styles/main.css file will be our global CSS file for the entire app. It will contain the Tailwind directives as well as any custom styles of our own. There are two options to import a global CSS file into our app: - Import it via a gatsby-browser.js configuration file. - Import it into a Layout higher order component. To keep things simple we'll cover the gatsby-browser.js option in this article. Create a gatsby-browser.js file in your project root. touch gatsby-browser.js Now open the gatsby-browser.js file you just created and simply import the global CSS file to finish the setup. import './src/styles/main.css'; Personally, I usually use a Layout Component as my preferred approach. If you're curious you can learn more about using a layout component in Gatsby.
If you go the layout component route you then need to import src/styles/main.css into that file only and not the gatsby-browser.js configuration file. Create the PostCSS configuration One of the strengths of Gatsby is the rich plugin ecosystem that already exists to easily add any extra functionality we may require. Install the PostCSS plugin for Gatsby. npm install --save gatsby-plugin-postcss Activate the plugin by adding 'gatsby-plugin-postcss' to the plugins array within the gatsby-config.js file in the project root. module.exports = { plugins: ['gatsby-plugin-postcss'], }; Next we need to create a postcss.config.js file in the root of your project. touch postcss.config.js And populate the file with our PostCSS configuration. module.exports = { plugins: [require('tailwindcss'), require('autoprefixer')], }; At this stage we have only imported the framework but aren't actually using any of the Tailwind classes in our application. Replace src/pages/index.js with the below code. Notice that we have now made use of a few of Tailwind's classes such as text-5xl and mx-auto. import React from 'react'; function Index() { return ( <div className="container mx-auto"> <main className="text-center"> <h1 className="text-5xl font-semibold my-5"> Welcome to{' '} <a href="" className="text-purple-600"> Gatsby.js! </a> </h1> <p className="description"> Get started by editing{' '} <code className="font-mono bg-gray-200 p-1 rounded"> src/pages/index.js </code> </p> </main> </div> ); } export default Index; Let's run a quick production build. npm run build Once that has completed, open the generated CSS found in the newly created public directory. This generated CSS file is absolutely massive and contains the entirety of the framework including tons of styles we haven't used at all! Tailwind will even warn us that our styles are not being purged during the build process.
This is expected behaviour at this point given our settings. To fix it, update the purge option in the tailwind.config.js file so it lists the files that use Tailwind classes: module.exports = { purge: ['./src/**/*.js'], theme: { extend: {}, }, variants: {}, plugins: [], }; Now that we have the purge settings in place, Tailwind will look through those files and treeshake any unused styles from the final production build. We have only set Tailwind to look through JavaScript files inside of our src directory. Be sure to tailor this to your own requirements, such as adding posts/**/*.mdx if you start using MDX for your local posts. Gatsby's rich plugin ecosystem makes it super easy to include Tailwind in your next build so be sure to give it a try.
https://www.39digits.com/add-tailwind-to-gatsby/
public class | source import Navigation from 'flarum/components/Navigation.js' Navigation Extends: flarum/Component~Component → Navigation The Navigation component displays a set of navigation buttons. Typically this is just a back button which pops the app's History. If the user is on the root page and there is no history to pop, then in some instances it may show a button that toggles the app's drawer. If the app has a pane, it will also include a 'pin' button which toggles the pinned state of the pane. Accepts the following props: className: The name of a class to set on the root element. drawer: Whether or not to show a button to toggle the app's drawer if there is no more history to pop.
https://api.flarum.dev/js/v0.1.0-beta.3/class/js/lib/components/Navigation.js~Navigation.html
The QAsciiDict class is a template class that provides a dictionary based on char* keys. More... #include <qasciidict.h> Inherits QGDict. List of all member functions. QAsciiDict is implemented as a template class. Define a template instance QAsciiDict<X> to create a dictionary that operates on pointers to X, or X*. A dictionary is a collection that associates an item with a key. The key is used for inserting and looking up an item. QAsciiDict has char* keys. QAsciiDict cannot handle Unicode keys; instead use the QDict template, which uses QString keys. A QDict has the same performance as a QAsciiDict. The dictionary has very fast insertion and lookup. Example: #include <qasciidict.h> #include <stdio.h> void main() { // Creates a dictionary that maps char* ==> char* (case insensitive) QAsciiDict<char> dict( 17, FALSE ); dict.insert( "France", "Paris" ); dict.insert( "Russia", "Moscow" ); dict.insert( "Norway", "Oslo" ); printf( "%s\n", dict["Norway"] ); printf( "%s\n", dict["FRANCE"] ); printf( "%s\n", dict["russia"] ); if ( !dict["Italy"] ) printf( "Italy not defined\n" ); } Program output: Oslo Paris Moscow Italy not defined The dictionary in our example maps char* keys to char* items. Note that the mapping is case insensitive (specified in the constructor). QAsciiDict implements the [] operator to look up an item. When an item is inserted, the key is hashed to an array index. The item is inserted before the first bucket in the list of buckets. Looking up an item is normally very fast. The key is again hashed to an array index, and QAsciiDict then scans the bucket list at that index for the item. Example: #include <qasciidict.h> #include <stdio.h> void main() { // Creates a dictionary that maps char* ==> char* (case sensitive) QAsciiDict<char> dict; dict.insert( "Germany", "Berlin" ); dict.insert( "Germany", "Bonn" ); printf( "%s\n", dict["Germany"] ); dict.remove( "Germany" ); // Oct 3rd 1990 printf( "%s\n", dict["Germany"] ); } Program output: Bonn Berlin The QAsciiDict's default implementation is to delete the item if auto-deletion is enabled.
See also QAsciiDictIterator, QDict, QIntDict, QPtrDict and Collection Classes. Constructs a copy of dict. Each item in dict is inserted into this dictionary. Only the pointers are copied (shallow copy). Constructs a dictionary with the following properties: Setting caseSensitive to TRUE will treat "abc" and "Abc" as different keys. Setting it to FALSE will make the dictionary ignore case. Case insensitive comparison includes only the 26 standard letters A..Z, not French or German accents or Scandinavian letters. Setting copyKeys to TRUE will make the dictionary copy the key when an item is inserted. Setting it to FALSE will make the dictionary only use the pointer to the key. Removes all items from the dictionary and destroys it. All iterators that access this dictionary will be reset. See also setAutoDelete(). [virtual] Removes all items from the dictionary. The removed items are deleted if auto-deletion is enabled. All dictionary iterators that operate on the dictionary are reset. Takes the item associated with the key out of the dictionary without deleting it; if there are two or more items with equal keys, the last inserted of these will be taken. Returns a pointer to the item taken out, or null if the key does not exist in the dictionary. All dictionary iterators that refer to the taken item will be set to point to the next item in the dictionary traversal order. See also remove(), clear() and setAutoDelete(). This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved.
http://doc.trolltech.com/2.3/qasciidict.html
How to access BOWImgDescriptorExtractor in java? I am trying to do a CvNormalBayesClassifier and I need a BOWImgDescriptorExtractor object. I cannot find it to import. Is it built into opencv_java249.jar? I have seen that I have to import com.googlecode.javacv.cpp.opencv_features2d.BOWImgDescriptorExtractor but it cannot find anything like this. I have built OpenCV with CMake and I have created the opencv_java249.jar and the libopencv_java249.so file. I have added them to my java project (based on grails, if this matters) and I have not managed to find the BOWImgDescriptorExtractor class. Help, please "Is it built into opencv_java249.jar?" - no.
https://answers.opencv.org/question/40424/how-to-access-bowimgdescriptorexctractor-in-java/
For me software layering is one of the fundamental pillars of physical software design. Whether we recognise it or not, all software is layered. In simplest terms layering is the process of building source code on top of source code. A more considered approach recognises the process. Like a brick built house, rows of layers (bricks) are placed on those below, each layer has well defined boundaries; the lower boundary is the dependencies on lower layers, the upper boundary is the interface it exposes to the layers above. The concept of layering is tied up with the concepts of modules because layers are built from modules. More formally: A layer is one or more modules of a system that do not depend on one another. Layering is a physical mechanism for building software modules on top of existing modules. A layer will depend on layers below it, but should never depend on a layer above. Layering is transitive downwards: if layer 3 depends on 2 that depends on 1, then 3 also depends on 1. A module is mapped from the logical to the physical world using a namespace in C++ and a package in Java. (If your compiler does not yet support namespace or you are using some other language, you can, and should, apply the same concepts, but you will need a little more discipline.) A module is not always a layer but frequently is. When a module as the smallest piece of reusability layering is important in understanding dependencies. Layering dependencies are always in a downward direction; an upward dependency opens a circular dependency because of the transitivity. The lower layers (OS, C library, C++ library) are usually taken for granted: in UNIX systems it can be hard to define the line between the C library and the OS library. These layers represent the foundations upon which we will build our programs. These modules represent a good model of what we should aim to achieve: namely, they are not dependent on our code and the interfaces are well defined. 
Once again the main reference for layering is John Lakos. (This cannot be the only book which talks in depth about physical software design, but most literature on software design deals with logical rather than physical design.) Lakos likes to number layers, starting at zero for the OS libraries and working upwards. Although this helps with producing a theoretical description of layers, in practice it can cause documentation problems because layers are added, removed and modules split. A module provides an abstract representation of some area of functionality. Like an iceberg, only a small part is visible (the header files, the library file) while most of the module is below the surface in the source code files which are not exposed to the outside world. This carries over to a layer. Frequently a layer is just one module, let's call it M1. Other modules use this module; they are said to be layered on top. However, a layer may contain modules M1, M2 and M3. As long as M1, M2 and M3 are in no way dependent on one another they can co-habit in one layer, say L1. But if M2 and M3 in some way depend on M1 (i.e. they use something in M1) then M2 and M3 are in a layer above M1; now we have L1 containing M1, while the next layer, L2, contains modules M2 and M3. Within a module and a layer, it is natural to find classes that depend on other classes. This does not imply more layers because these dependencies are hidden below the surface, so the users of the module do not need to know about the dependency. (You may still find it useful to think of internal layers.) Every module exposes an interface. This should be small enough to perform its intended role without confusing the user. It should be well documented. The module should have some central concept, e.g. database abstraction or inter-process communication. It is important that this central concept be clearly stated and respected by the module.
It would not do for a database layer to also contain functions to control a robot arm. This makes the module confusing and will detract from its usefulness. One exception here is the idea of a utilities module set out in the first example. Within such a module you may find diverse functions such as string manipulation, complex numbers, etc. This is an issue of practicality: the utility module effectively contains a lot of small modules; it is easier to manage one such module than a multitude of small modules. The defining concept is: a collection of utilities. If one area of functionality starts to stand out, it is time to split the module into two (or more) modules that may be independent of each other or layered. A layer may not have a central concept because it can hold several modules which have different central concepts. Each layer is a physical grouping while a module is a logical grouping. Recognise which layers are above and below your layer. If you do not need to be dependent on a layer then do not be; additional dependencies detract from flexibility and hence maintainability and re-use. The physical manifestation of a layer may be difficult to see in raw source code because dependencies are not always clear. Lakos' numbering scheme helps enumerate the layers of a system but makes documentation quickly dated; as for source code, it would mean identifying which layer every file was in, and because layers may be inserted and removed, keeping the comments up to date would be impossible. In my previous article on include files I suggested you divide your header files between: system, project and local; this is a basic form of layering. One system I worked on in the last few years had a layering model which looked a bit like figure 2. Just about any system we build will have the OS libraries at the bottom. On top of this will come the libraries of our language. As such the lowest two levels of any system are fixed.
Many projects, organisations and even individual developers have a set of common utilities which are used again and again, e.g. IP translation, string conversions, etc. In many cases these are syntactic sugar on some feature the OS or language provides, but we choose to wrapper it in some fashion for whatever reason. This utilities layer often becomes as much a part of our foundation as the language libraries. In this particular system there are two modules sharing one layer: GUI utilities and IPC utilities. These two modules have no dependency upon one another but are both dependent on our general utilities; higher layers depend on both modules although they need not. While we build layer upon layer it is good to only build upon the layers we need to. Suppose instead that the IPC layer depended on the GUI layer: this would make the IPC layer less flexible because it cannot be used without the GUI layer; it also becomes vulnerable to changes in the GUI layer. The top layer in almost any system is always the same: the application layer, the layer that implements our business application. I say almost because there are two obvious exceptions: We may be building a product which exposes an API which is used to build the final application, e.g. I am sure Sybase has many layers of source code below ct-lib, but ct-lib is not an application in itself. There may be further layers on top of the application, e.g. the application may be free of any user interface, which is added as a presentation layer on top; imagine an application which exposes a graphical interface for PCs and a text interface for mainframes. Downward transitivity between layers is taken as given in a dependency diagram like the one above. Although some may choose to show the application layer depending directly on the utilities, this quickly leads to excessive complexity in a diagram.
Sometimes, as in the next example, you may want a higher layer not to see a lower layer: physically there is a dependency but you can logically hide it through program syntax and design constructs such as abstract interfaces. On another project we faced the challenge of porting a Sybase based application to Oracle and Microsoft SQL Server. The solution was an entire database layer. The interface to this layer was loosely based on the Sybase db-lib interface to simplify the porting. Below this interface the layer allowed drivers for Oracle, Sybase and Microsoft databases to be plugged in. Once the interface and Sybase driver were complete we were able to "jack up" the application and slip our database layer below the application, which was reworked to access the database layer rather than going direct to Sybase. At this point the app worked fine, and we were able to develop Microsoft and Oracle drivers while the layer was in use. With this example I am deliberately trying to draw parallels to the way buildings are constructed - the standard libraries are the foundations, next comes our own concrete base, then the floors of our building. The ground floor holds up the first floor and so on. The layer is good because it abstracts and simplifies. It allows different areas of the system to be worked on at the same time while the interface boundaries are respected. In the above example work continued on the main application while the database layer was being added. However, abstraction can lead to problems precisely because details are hidden. In the database example all the layering did not save us from having to re-work the SQL contained in the main application. This still had to be reduced to a common subset for all three databases. Anyone maintaining the system could not forget this: although there was an abstraction in place which made the system portable, it still imposed constraints on developers. If these constraints were not public they could lead to problems.
It is becoming increasingly common to talk about an Application domain and a Solution domain - see Ian Bruntlett and Jim Coplien. The application domain is what used to be called Business Logic, while the solution domain details the elements that we will use to solve a problem, e.g. Windows 2000 with a Sybase database, or AIX with an Oracle database. In general the solution domain comprises the lower levels of the system and aims to abstract some of the complexities away, allowing us to develop the business logic free from considerations of OS quirks, or database. In truth it's not quite this simple. We can abstract ourselves away from the underlying technologies without introducing inflexibility. In addition, there is always the possibility of an Outside Context Problem occurring: the above database example would be rendered useless if faced with a request to port to an object-oriented database such as Versant. The module interface should be provided by one or, more usually, several header files. As these are related to a namespace, place them in a separate, publicly accessible, directory of their own. Most importantly the module must be presented for use by packaging it. Typically this is done with a static library (.lib) or, more usually, a shared/dynamic library (.so in UNIX/.dll in Windows). The module forms a collection of files that are compiled and packaged together. Personally, I prefer static link libraries to dynamic libraries, for the following reasons: Although dynamic libraries once represented a way to save memory, this is usually less of a consideration today. Static linking ensures we have no undefined symbols or mis-matches at compile time, not run time. Start-up times for applications with many dynamic libraries to load can be longer than for a "fat" binary. With a dynamic library it's possible that the wrong one will be loaded (most Windows programmers have suffered this at some time).
Dynamic linking means we need to distribute and version control a separate customer deliverable. (This may be an advantage and a powerful way to cope with some variances.) The module and hence the layer should have clearly defined dependencies; you should be able to diagram these dependencies in an acyclic graph. If module X depends on Y, which depends on Z, which depends on X, you have a problem. A change to X may ripple to Y, which may ripple to Z, which... Perhaps one of the greatest advantages of dividing your system into modules and layers is that you form firewalls between these layers where such dependency problems will become obvious. It is not uncommon for a module to be applicable to more than one project; knowing what the dependencies are increases the chances it will work in the next project as is. The most important thing about modules and layers is to recognise where they exist. Decide what your modules and layers are early in your physical design, and do not be afraid to add new modules and layers, and remove others: layer early - layer often. If you can continue to add (and remove) modules you are demonstrating that your system has flexibility. What you should avoid at all times are cyclic dependencies. Although a linker may pick this up, it may not, and what one linker accepts may be rejected by another. If you have lots of dynamic libraries you may not see the problem at all. When devising modules and mapping layers my guiding principle is: a place for everything, and everything in its place. Large Scale C++ Software Design, John Lakos, Addison-Wesley, 1996. Overload 39, September 2000. Multi-Paradigm Design for C++, Jim Coplien, 2000. Thanks to Ian Bruntlett (Overload 39) for describing how the Outside Context Problem of Iain M. Banks' Excession novel (1996) is applicable to the software world. Kevlin Henney, Overload 39.
https://accu.org/index.php/journals/460
On Fri, Oct 08, 2004 at 10:55:44AM +0900, GOTO Masanori wrote: > Hi, > > > It would be nice if glibc reset its internal state if when optind is set > > to 1, as well as when optind is set to 0. Historically, the mechanism > > which worked across BSD, AT&T, and most commercial unix systems derived > > from the same require that getopt() be reset by setting optind to 1. > > Setting optind to 0 causes getopt() to misbehave on a number of > > platforms, most notably and most recently Mac OS X, but probably on > > other BSD-derived systems. > > > > On the other hand, setting optind to 1 in an attempt to reset things > > causes glibc to core dump under some conditions. > > > > The lack of interoperability on this point means that I was forced to > > work around the bug in a particularly ugly fashion, as shown below. > > It would be really nice if I didn't need this kind of kludgery. > > Glibc defines optind = 1 in initial state in 2.3.2.ds1-17. POSIX says > it should be 1 before any calls. Could you provide us more > information or an example to reappear this report? The problem is what happens if you want to use getopt() multiple times to parse multiple sets of argc/argv arguments. For example, e2fsprogs's debugfs program uses getopt to parse each command given to debugfs: # debugfs /dev/hda3 debugfs: stat / ... debugfs: clri <2> ... debugfs: quit In this case, "stat /", "clri <2>", and "quit" are all parsed using getopt(), as well as being the command-line arguments to debugfs. Unfortunately, gnu glibc has extra allocated memory which is used to support long options (i.e., --help). This information is only reset when you set optind=0 before calling getopt() to parse a new set of argv[] vectors. If you do not reset optind to 0 before parsing a new set of argv[] vectors, getopt will core dump. Unfortunately, on BSD systems, you have to set optind=1 before parsing a new set of argv[] vectors, or BSD systems will core dump. 
To reproduce, recompile e2fsprogs and remove the magic GLIBC branch below; if you always reinitialize optind=1 in reset_getopt(), you will watch getopt core dump rather spectacularly after executing a number of debugfs commands with and without options.

					- Ted

/*
 * This function resets the libc getopt() function, which keeps
 * internal state.  Bad design!  Stupid libc API designers!  No
 * biscuit!
 *
 * BSD-derived getopt() functions require that optind be reset to 1 in
 * order to reset getopt() state.  This used to be the generally accepted
 * way of resetting getopt().  However, glibc's getopt()
 * has additional getopt() state beyond optind, and requires that
 * optind be set to zero to reset its state.  So the unfortunate state of
 * affairs is that BSD-derived versions of getopt() misbehave if
 * optind is set to 0 in order to reset getopt(), and glibc's getopt()
 * will core dump if optind is set to 1 in order to reset getopt().
 *
 * More modern versions of BSD require that optreset be set to 1 in
 * order to reset getopt().  Sigh.  Standards, anyone?
 *
 * We hide the hair here.
 */
void reset_getopt(void)
{
#ifdef __GLIBC__
	optind = 0;
#else
	optind = 1;
#endif
#ifdef HAVE_OPTRESET
	optreset = 1;		/* Makes BSD getopt happy */
#endif
}
http://lists.debian.org/debian-glibc/2004/10/msg00086.html
That is correct, but the implication is that developers will have to add <xsl:if> tags in their XSLT transformations around each and every element where this may cause a problem. This article will show how, using a small and generic Java class and the "Java Callout" feature of the 11g Mediator, we can automatically "fix" these empty elements so that they will not cause problems with schema validation any more.

First, a quick recap of what we'd need to do to prevent this problem from occurring in the first place. There are basically two ways. The first is to add <xsl:if> statements around all these elements in all XSLT transformations (the exact test and select expressions were lost from this copy and depend on your source elements; the idea is to emit the element only when its value is non-empty):

<xsl:if test="inp1:resultDate != ''">
  <inp1:resultDate>
    <xsl:value-of select="inp1:resultDate"/>
  </inp1:resultDate>
</xsl:if>

This will prevent empty elements from occurring in the output message.

The other option is to use the xsi:nil attribute from the Schema Instance namespace. For this to work, you'd have to set the nillable="true" attribute in the schema definitions of these elements, and then you'd have to map them like this to cover all possible cases (again, the exact expressions depend on your schema):

<xsl:if test="inp1:resultDate">
  <inp1:resultDate>
    <xsl:if test="inp1:resultDate/@xsi:nil">
      <xsl:attribute name="xsi:nil">
        <xsl:value-of select="inp1:resultDate/@xsi:nil"/>
      </xsl:attribute>
    </xsl:if>
    <xsl:value-of select="inp1:resultDate"/>
  </inp1:resultDate>
</xsl:if>

Of course, JDeveloper will generate this code for you in the graphical XSLT Map Editor, when you use the following Automap setting:

But still, that's a lot of mapping logic for a single element, and you can't always use automap! There's a lot to be said for a mechanism that automatically deletes, or "xsi:nillifies", all empty elements "on the way out", without having to worry too much about them before that. In BPEL, I've often used a generic XSLT transformation at the end of the flow to do just that. But with the Mediator, that is not so easy to do, since you only have one transformation (in each direction) available to you.
However, Mediator 11g has a "Java Callout" property that you can define against each operation, which allows you to program a Java class that gets called, among other points, after all the transformations are done. An excellent discussion of this feature can be found in this blog entry by Lucas Jellema. For this purpose, I have written a simple Java Callout class that adds the xsi:nil="true" attribute automatically on ALL empty elements in a document. It is invoked at the very end of the Routing Rule, just before the response is sent back to the client. The code is below:

All you need to do to invoke this logic is to add this class to your SOA project (you can also put it in a jar and add it to the classpath of both JDeveloper and the WLS server), and add it to the "Java Callout" property of your Mediator components:

And that's all there is to it. There's one thing to consider, though. You will NOT see the result of the Java Callout anywhere in the audit trail in the console! This is the screenshot of the instance details of a simple Mediator component I used for testing:

However, when you look at the XML that was returned for this test instance, you DO see the xsi:nil="true" attributes:

The code above is easily modified to delete the empty elements instead of adding the xsi:nil attribute to them. Happy programming!

It's really a good demo regarding transformation... if possible send me some more content on SOA.
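The Java listing itself did not survive in this copy. To illustrate the core "nillify" idea only - walk the result document and mark every element with no content with xsi:nil="true" - here is a rough sketch in Python (the element names and the choice of DOM API are illustrative; the real callout is a Java class wired into the Mediator callback):

```python
import xml.dom.minidom as minidom

XSI = "http://www.w3.org/2001/XMLSchema-instance"

def nillify(element):
    """Recursively add xsi:nil="true" to every element with no content."""
    for child in list(element.childNodes):
        if child.nodeType == child.ELEMENT_NODE:
            nillify(child)
    if not element.childNodes:            # no text and no child elements
        element.setAttributeNS(XSI, "xsi:nil", "true")

doc = minidom.parseString(
    '<result xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
    '<name>ok</name><resultDate/></result>')
nillify(doc.documentElement)
print(doc.documentElement.toxml())
# the empty <resultDate/> now carries xsi:nil="true"
```

The actual callout performs the same traversal with the Java DOM API on the Mediator's message payload; the sketch above only demonstrates the walk and the attribute being added.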
https://technology.amis.nl/2010/03/18/java-callout-in-mediator-to-automatically-deal-with-empty-elements-2/
The Q3DateTimeEdit class combines a Q3DateEdit and Q3TimeEdit widget into a single widget for editing datetimes. More...

#include <Q3DateTimeEdit>

This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.

The Q3DateTimeEdit class combines a Q3DateEdit and Q3TimeEdit widget into a single widget for editing datetimes. Q3DateTimeEdit consists of a Q3DateEdit and a Q3TimeEdit placed side by side; use Q3DateEdit::setOrder() to change the order in which the date sections appear. Times appear in the order hours, minutes, seconds using the 24 hour clock.

It is recommended that the Q3DateTimeEdit is initialised with a datetime, e.g.

Q3DateTimeEdit *dateTimeEdit = new Q3DateTimeEdit(QDateTime::currentDateTime(), this);
dateTimeEdit->dateEdit()->setRange(QDate::currentDate(), QDate::currentDate().addDays(7));

Here we've created a new Q3DateTimeEdit set to the current date and time, and set the date to have a minimum date of now and a maximum date of a week from now.

Terminology: A Q3DateEdit widget consists of three 'sections', one each for the year, month and day. Similarly a Q3TimeEdit consists of three sections, one each for the hour, minute and second. The character that separates each date section is specified with setDateSeparator(); similarly setTimeSeparator() is used for the time sections.

See also Q3DateEdit and Q3TimeEdit.

This property holds the editor's datetime value: the datetime edit's datetime, which may be an invalid datetime. Access functions:

Constructs an empty datetime edit with parent parent and name name.

This is an overloaded function. Constructs a datetime edit with the initial value datetime, parent parent and name name.

Destroys the object and frees any allocated resources.

Returns true if auto-advance is enabled, otherwise returns false. See also setAutoAdvance().

Returns the internal widget used for editing the date part of the datetime.
Reimplemented from QWidget::minimumSizeHint().

Reimplemented from QWidget::resizeEvent(). Intercepts and handles the resize event, which has a special meaning for the Q3DateTimeEdit.

Sets the auto-advance property of the editor to advance. If set to true, the editor will automatically advance focus to the next date or time section when the user has completed a section. See also autoAdvance().

Reimplemented from QWidget::sizeHint().

Returns the internal widget used for editing the time part of the datetime.

This signal is emitted every time the date or time changes. The datetime argument is the new datetime.
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/q3datetimeedit.html
generated BCrypt salts have incorrect padding (Imported from Google Code)

Steps to reproduce...

As reported by Johan, most (but not all) bcrypt hashes generated by Passlib fail to verify under other bcrypt implementations.

---

Diagnosis...

After some examination, the problem is caused by the padding bits at the end of the bcrypt salt, and how Passlib generates bcrypt salts. BCrypt uses a base64 variant to encode its salt and digest data, encoding 6 bits per character. Due to the 16 byte raw salt, the last character of the encoded salt string only encodes 2 bits of data; the other 4 bits are unused "padding". Passlib preserves these padding bits as provided, and when verifying hashes, compares the digest characters, not the entire string. Whereas OpenBSD os_crypt() and pybcrypt hashpw() both set the padding bits to 0 when they return a result, and use logic equivalent to os_crypt(password, hash) == hash when verifying passwords. Thus if they are passed a hash with the padding bits set, it will silently fail to verify against *anything*.

This issue wouldn't have been revealed if it weren't for the fact that Passlib uses a simple getrandstr() call to generate encoded bcrypt salts, instead of generating raw bytes and encoding them. 15 out of 16 times, this causes Passlib to generate a bcrypt salt which has the padding bits set, and which therefore won't work if passed to OpenBSD or pybcrypt. Thus causing the observed behavior of most (but not all) hashes failing.

For example, the incorrect hash above should have the last salt char changed:

wrong: $2a$12$oaQbBqq8JnSM1NHRPQGXORm4GCUMqp7meTnkft4zgSnrbhoKdDV0C
right: $2a$12$oaQbBqq8JnSM1NHRPQGXOOm4GCUMqp7meTnkft4zgSnrbhoKdDV0C
                                    ^

---

Severity...

1. Affects most bcrypt hashes generated by Passlib, but only an issue if verification is done through a library other than Passlib.
2. Not a security risk.
3. Should be fixed asap; don't want more of these out there mucking things up for people.

---

Temporary fix until issue is resolved...
The following function should clear the padding bits of existing hashes, fixing any hashes already created by Passlib:

BCHARS = './ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
BCLAST = '.Oeu'

def cleansalt(hash):
    "ensure the 4 padding bits at the end of the bcrypt salt are always 0"
    if hash.startswith("$2") and hash[28] not in BCLAST:
        # keep the 2 data bits (high bits of the 6-bit group), clear the 4 padding bits
        hash = hash[:28] + BCHARS[BCHARS.index(hash[28]) & ~15] + hash[29:]
    return hash

The following monkeypatch should fix Passlib's salt generation until the issue is resolved:

from passlib.hash import bcrypt
from passlib.utils import rng, getrandstr

BCHARS = './ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
BCLAST = '.Oeu'

def gensalt(salt_size=None, strict=False):
    return getrandstr(rng, BCHARS, 21) + getrandstr(rng, BCLAST, 1)

bcrypt.generate_salt = staticmethod(gensalt)

---

(Tentative plans for a) Permanent fix...

While some of these changes are a departure from Passlib's past behavior of preserving the padding bits, they should allow existing hashes to verify correctly in Passlib, while ensuring Passlib's output is *always* compatible with the de facto standard (OpenBSD).

Release Passlib 1.5.3, containing the following changes:

1. Change passlib.hash.bcrypt to generate salts with the padding bits set to 0. This should cause genconfig() and encrypt() to generate OpenBSD-compatible hashes from this release onward.
2. Change passlib.hash.bcrypt to clear the padding bits when parsing a hash. This should cause genhash() to generate OpenBSD-compatible hashes even if the config string is not compatible.
2b. When padding bits are cleared per item 2, norm_salt() should issue a warning. This warning should alert users that they have an unclean hash which they may want to re-encrypt or clean, if they plan to verify the hash using a library other than Passlib.
3i. Add a unittest to verify salts returned by genconfig() and encrypt() are clean.
3ii. Add a unittest to verify generated hashes directly using os_crypt/pybcrypt, just to prevent recurrence of any bugs like this one.
3iii. Add the submitted test vectors, make sure genhash() outputs the "cleaned" result per item 2, and that verify() still works against unclean hashes.
4. Make note of all this in the "deviations" section of the documentation.

XXX: item 2 would also cause passlib.hash.bcrypt.from_string(hash).to_string() to "clean" existing hashes; need to verify such normalization should be the expected behavior from that operation.
XXX: it would be nice if verify_and_update() could take care of automatically cleaning hashes (per item 2b), but that should wait until a major release.

(Imported from Google Code) Fixed in release 1.5.3.
re: first XXX: decided item 2's behavior was acceptable for now; will deprecate the warning in favor of ValueError later.
re: second XXX: hacked a solution into hash_needs_update() for now.
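To see why '.Oeu' are the only valid final salt characters, and how clearing the low four padding bits repairs the example hash, a quick standalone check (no passlib needed):

```python
BCHARS = './ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'

# The 22nd salt character encodes only 2 data bits (the high bits of its
# 6-bit group); the low 4 bits are padding and must be zero.
valid_last = ''.join(c for c in BCHARS if BCHARS.index(c) & 15 == 0)
print(valid_last)  # -> .Oeu

# Clearing the padding bits maps the bad example's last salt char 'R' to 'O':
fixed = BCHARS[BCHARS.index('R') & ~15]
print(fixed)  # -> O
```

This is exactly the 'R' to 'O' change shown in the wrong/right hash pair above.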
https://bitbucket.org/ecollins/passlib/issues/25
The brand of the car is a BMW. The color of the car is silver. The mileage of the car is 25000 miles. The gas tank is full. You (don't want) to fill up the gas tank.

But here is what I get:

I know my problem is in this line:

output.write(x + System.getProperty("line.separator"));

I'm not sure what I'm supposed to put instead of x. I have tried thelist and the same output comes out. Any help or advice would be greatly appreciated!!

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class P10 {
    public static void main(String args[]) throws IOException {
        Info[] thelist = new Info[10];
        Mazda m = new Mazda();
        Ford f = new Ford();
        Kia k = new Kia();
        Dodge d = new Dodge();
        Mercedes r = new Mercedes();
        Chrysler c = new Chrysler();
        Jeep j = new Jeep();
        Hyundai h = new Hyundai();
        Jaguar g = new Jaguar();
        BMW b = new BMW();
        thelist[0] = m;
        thelist[1] = f;
        thelist[2] = k;
        thelist[3] = d;
        thelist[4] = r;
        thelist[5] = c;
        thelist[6] = j;
        thelist[7] = h;
        thelist[8] = g;
        thelist[9] = b;
        try {
            Writer output = null;
            String text = "This is the information on the cars:"; // text has been assigned
            File i = new File("Information.txt"); // name of txt file
            output = new BufferedWriter(new FileWriter(i));
            output.write(text); // text is output
            System.out.println("Your file has been written to " + (i.getName() + "."));
            for (Info x : thelist) {
                x.Brand();
                x.Color();
                x.Mileage();
                x.Gas();
                x.Answer();
                System.out.println();
                output.write(System.getProperty("line.separator")); // blank space
                output.write(x + System.getProperty("line.separator")); // output is the array of car objects
            } // ends for
            output.close(); // ends output
        } // ends try
        catch (IOException e) { // if there is no file
            System.err.println("The file was not saved."); // error message
            e.printStackTrace();
        } // ends catch
    }
}
https://www.dreamincode.net/forums/topic/177528-wrting-an-array-of-objects-to-text-file/
Design Principles

- If only several simple APIs of another big library are needed, try to extract them and add them to your program instead of depending on the whole library whenever possible. (Beware of the license.)
- Only use libraries from other desktops when they are small or efficient enough and have few dependencies.
- Only create a daemon if there is a really good reason.
- ...are used to it. Trying to fight your users is apparently unwise.
- Try to shorten the startup time, since this greatly affects

Coding Styles

Currently, the coding styles of the lxde programs are all different. However, it's good for a project to have a consistent coding style in all the programs. Personally, my coding style is like this:

#include "some-header.h"

void function()
{
    if( value )
    {
        char str[10];
        for( i = 0; i < 10; ++i )
        {
            do_something();
            str[i] = '0' + i;
        }
        str[i] = '\0';
    }
    else if( value2 )
    {
        do_other_things( param1, param2, param3 );
    }
    else
    {
        do_nothing();
    }
}

Some rules are not strictly followed, either. If anyone finds those rules quite annoying, the project is open for discussion. Anyway, we can use some formatting tools such as astyle to reformat all the source code.
https://wiki.lxde.org/en/Design_Principles
Check Out Windows 8: I wanna be commercial
Ad Watch @ AspRangers.com, Year: 2012
Meet Steve B. See his Windows Phone 8. Really liked this awesome video. Check out Steve Ballmer's WPhone 8. Read More @ AspRangers

Reading the return value from Main() in VBScript, batch file, PowerShell script
The Main() method in C# can have either void or int as the return value. Void: namespace TestSolution. Read More @ AspRangers

What's in a color: yellow, blue, gray color USB ports on Lenovo
I recently got a Lenovo X220 laptop. I always kept wondering about the different colors of the USB ports it supplied. What does each color signify? Once I found out the meaning of each port color, I thought it would be worth sharing with you. Read More @ AspRangers

ASP.NET 4 - Exploring the inbuilt Chart Control in VS 2010
ASP.NET 4 introduces the inbuilt charting controls for Web Forms with Visual Studio 2010. These controls were available as a separate download and can be used with Visual Studio 2008. Here are the steps to add the chart to your application: Add the Chart control to your page. The Chart control is…

To check whether a file is in use or not in a .NET program
'C:\Inetpub\XXX.jpg' because it is being used by another process. at…

Samples for Programmatically Accessing / Scripting / Automation of Administrative Tasks in IIS
Some time back I reviewed a few of the samples for automating tasks in IIS (6, 7.x) using MWA, WMI, ADSI. Here are the links to download the sample code (C#, VB.NET) from the OneCode site: Admin IIS 7.x using MWA (C#). Read More @ AspRangers…

Free E-Book Gallery for Microsoft Technologies
ASP.NET, Office, SQL Server, Windows Azure, SharePoint Server. You can download (in EPUB, MOBI or PDF format) e-books on Microsoft technologies (ASP.NET, Office, SQL Server, Windows Azure, SharePoint Server and others). These e-books are free.
Read More. Enjoy!

Difference between Windows Server 2008 and Windows Server 2008 R2 OS
Yesterday my senior asked me what the difference is between the Windows Server 2008 and Windows Server 2008 R2 servers. Thought to share the answer with you: Windows Server 2008 is the same codebase bits as Vista. It is available in two flavors, 32-bit and 64-bit versions. Windows Server 2008 R2 is the same codebase bits as Windows 7…
https://blogs.msdn.microsoft.com/jaskis/?m=20123
Meta Message Types

Supported Messages

sequence_number (0x00)
Sequence number in type 0 and 1 MIDI files; pattern number in type 2 MIDI files.

marker (0x06)
Marks a point of interest in a MIDI file. Can be used as the marker for the beginning of a verse, solo, etc.

set_tempo (0x51)
Tempo is in microseconds per beat (quarter note). You can use bpm2tempo() and tempo2bpm() to convert to and from beats per minute. Note that tempo2bpm() may return a floating point number.

time_signature (0x58)
Time signature of:

4/4 : MetaMessage('time_signature', numerator=4, denominator=4)
3/8 : MetaMessage('time_signature', numerator=3, denominator=8)

Note: From 1.2.9, time signature messages have the correct default value of 4/4. In earlier versions the default value was 2/4 due to a typo in the code.

key_signature (0x59)
Valid values:

A A#m Ab Abm Am B Bb Bbm Bm C C# C#m Cb Cm D D#m Db Dm E Eb Ebm Em F F# F#m Fm G G#m Gb Gm

Note: the mode attribute was removed in 1.1.5. Instead, an 'm' is appended to minor keys.

Unknown Meta Messages

Unknown meta messages will be returned as UnknownMetaMessage objects, with type set to unknown_meta. The messages are saved back to the file exactly as they came out. Code that depends on UnknownMetaMessage may break if the message in question is ever implemented, so it's best to only use these to learn about the format of the new message and then implement it as described below.
UnknownMetaMessage objects have two attributes:

``type_byte`` - a byte which uniquely identifies this message type
``data`` - the message data as a list of bytes

These are also visible in the repr() string:

UnknownMetaMessage(type_byte=251, data=(1, 2, 3), time=0)

Implementing New Meta Messages

If you come across a meta message which is not implemented, or you want to use a custom meta message, you can add it by writing a new meta message spec:

from mido.midifiles.meta import MetaSpec, add_meta_spec

class MetaSpec_light_color(MetaSpec):
    type_byte = 0xf0
    attributes = ['r', 'g', 'b']
    defaults = [0, 0, 0]

    def decode(self, message, data):
        # Interpret the data bytes and assign them to attributes.
        (message.r, message.g, message.b) = data

    def encode(self, message):
        # Encode attributes to data bytes and
        # return them as a list of ints.
        return [message.r, message.g, message.b]

    def check(self, name, value):
        # (Optional.)
        # This is called when the user assigns
        # to an attribute. You can use this for
        # type and value checking. (Name checking
        # is already done.)
        #
        # If this method is left out, no type and
        # value checking will be done.

        if not isinstance(value, int):
            raise TypeError('{} must be an integer'.format(name))

        if not 0 <= value <= 255:
            raise TypeError('{} must be in range 0..255'.format(name))

Then you can add your new message type with:

add_meta_spec(MetaSpec_light_color)

and create messages in the usual way:

>>> from mido import MetaMessage
>>> MetaMessage('light_color', r=120, g=60, b=10)
MetaMessage('light_color', r=120, g=60, b=10, time=0)

and the new message type will now work when reading and writing MIDI files.

Some additional functions are available:

encode_string(unicode_string)
decode_string(byte_list)

These convert between a Unicode string and a list of bytes using the current character set in the file.
If your message contains only one string with the attribute name text or name, you can subclass from one of the existing messages with these attributes, for example:

class MetaSpec_copyright(MetaSpec_text):
    type_byte = 0x02

class MetaSpec_instrument_name(MetaSpec_track_name):
    type_byte = 0x04

This allows you to skip everything but type_byte, since the rest is inherited. See the existing MetaSpec classes for further examples.
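As a footnote to the set_tempo section above: the conversion performed by bpm2tempo()/tempo2bpm() is just the microseconds-per-beat formula. A standalone sketch of the arithmetic (not mido's actual implementation, which may differ in rounding details):

```python
def bpm2tempo(bpm):
    """Beats per minute -> microseconds per beat (quarter note)."""
    return round(60 * 1000000 / bpm)

def tempo2bpm(tempo):
    """Microseconds per beat -> beats per minute (may be a float)."""
    return 60 * 1000000 / tempo

print(bpm2tempo(120))     # -> 500000
print(tempo2bpm(500000))  # -> 120.0
```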
https://mido.readthedocs.io/en/latest/meta_message_types.html
I'm using getline to receive some characters but it skips out on receiving some of the data. In detail, the program skips asking for the course code the first time. Here is the code:

// This function is to be used to get the courses completed at the end of each
// semester and save the information to a file (includes course code, course
// name, letter grade).
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    ofstream myfile;
    int numCourse;
    char courseCode[20];
    char courseName[20];
    char letterGrade[5];
    myfile.open("completed.txt");
    cout << "How many courses have you done this semester?" << endl;
    cin >> numCourse;
    for (int i = 0; i < numCourse; i++)
    {
        cout << "Please enter course code" << endl;
        cin.getline(courseCode, 20);
        myfile << courseCode[20] << " ";
        cout << "Please enter course name" << endl;
        cin.getline(courseName, 20);
        myfile << courseName[20] << " ";
        cout << "Please enter your letter grade" << endl;
        cin.getline(letterGrade, 5);
        myfile << letterGrade[5] << " \n";
    }
    myfile.close();
    return 0;
}

Your assistance would be greatly appreciated. Thanks in advance!!
http://cboard.cprogramming.com/cplusplus-programming/109428-getine-command-skipping-help.html
Posted on 2007-04-27, last updated 2013-05-05 by Timo Bingmann at Permlink.

The B+ tree source package contains a speedtest program which compares the libstdc++ STL red-black tree with the implemented B+ tree under many different parameters. The newer STL hash table container from the __gnu_cxx namespace and the TR1 hash table tr1::unordered_map are also included in the comparison. The first test inserts n random integers into the tree / hash table. The second test first inserts n random integers, then performs n lookups for those integers and finally erases all n integers. The last test only performs n lookups on a tree pre-filled with n integers. All lookups are successful.

These three test sequences are performed for n from 125 to 4,096,000 / 32,768,000 or 65,536,000, where n is doubled after each test run. For each n the test procedure is repeated until at least one second of execution time elapses during the repeated cycle. This way the measured speed is reliable even for small n. In the 2013 test, only every other slot size is actually tested.

Three test results are included in the 0.9 tarball: one done in 2007 with version 0.7, another done in 2011 with version 0.8.6, and the third in 2013 with version 0.9, mainly to verify the speed gains from removing binary search. The speed increase between 0.9 and 0.8.6 is discussed in a separate blog post. The speed test source code was compiled with g++ 4.1.2 -O3 -fomit-frame-pointer.
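The repeat-until-one-second trick described above is a standard way to get stable timings for operations that are individually too fast to measure. A sketch of the idea in Python (not the actual C++ speedtest code; the dict workload is just a stand-in):

```python
import random
import time

def measure(fn, min_seconds=1.0):
    """Call fn repeatedly, doubling the repetition count until the timed
    batch takes at least min_seconds; return seconds per call."""
    reps = 1
    while True:
        start = time.perf_counter()
        for _ in range(reps):
            fn()
        elapsed = time.perf_counter() - start
        if elapsed >= min_seconds:
            return elapsed / reps
        reps *= 2

def insert_test(n):
    # stand-in workload: insert n random integers into a dict
    d = {}
    for _ in range(n):
        d[random.randrange(1 << 30)] = True

per_call = measure(lambda: insert_test(125), min_seconds=0.1)
print(per_call)  # seconds per batch, stable even for a tiny n
```

Dividing the elapsed time by the repetition count gives a per-run figure whose noise shrinks as the batch grows, which is why small n values still produce smooth curves in the plots.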
http://panthema.net/2007/stx-btree/speedtest/
This action might not be possible to undo. Are you sure you want to continue? 10/31/2011 text original 38 ©Ian Sommerville 2000 Key (Key) 17 0 17 45 23 25 Output (Found. 33. ?? true. 23 41. 6th edition. 30. 16. 1 true. L) true. 1 false. 23. 31. 4 false.Testing Search routine . 41. Knowledge of the program is used to identify additional test cases qObjective is to exercise all program statements (not all path combinations) qDerivation White-box testing 4 . 45 17. 29. 29. middle and last elements of the sequence are accessed qTest with sequences of zero length Search routine . Chapter 20 Structural testing qSometime called white-box testing of test cases according to program structure. 18. 7 true. 23. 21. ?? Slide 19 Software Engineering. 29. 21. 38 21. 9. 18. return . key element in array qPre-conditions satisfied. else top = mid 1 .length 1 . int [] elemArray. Result r ) { int bottom = 0 . r. } } //while loop } // search } //BinSearch Binary search (Java) Binary search . r.equiv. if (elemArray [mid] == key) { r. int top = elemArray. partitions 5 . partitions qPre-conditions satisfied. } // if part else { if (elemArray [mid] < key) bottom = mid + 1 .found = true . while ( bottom <= top ) { mid = (top + bottom) / 2 .index = mid . key element not in array qPre-conditions unsatisfied. key element in array qPre-conditions unsatisfied. int mid . r.found = false .index = 1 . key element not in array qInput array has a single value qInput array has an even number of values qInput array has an odd number of values Binary search equiv. 23. ?? true. 21. 31. 38. qAlthough all paths are executed. 29 9. 4 false. 23. 29.. Does not imply adequacy of testing. all combinations of paths are not executed qCyclomatic Binary search flow graph 6 . 18. 18. 23.test cases Binary search test cases Input array (T) 17 17 17. 21. 41. 23. 18. 1 false. 29. 29. 6th edition. 
Each branch is shown as a separate path and loops are shown by arrows looping back to the loop condition node qUsed as a basis for computing the cyclomatic complexity qCyclomatic complexity = Number of edges . 38 Key (Key) 17 0 17 45 23 21 23 25 Output (Found.Testing Equivalence class boundaries Elements < Mid Elements > Mid Midpoint Binary search . 16. 30.Number of nodes +2 Cyclomatic complexity qThe number of tests to test all control statements equals the cyclomatic complexity complexity equals number of conditions in a program qUseful if used with care. 33. L) true. 38 12. 23. 7 true. 21. 21. 41 17. 3 true. 4 true. ?? ©Ian Sommerville 2000 Software Engineering. 45 17. 33. 18. 32 21. 1 true. q1. 9 2. 2. 8. 7. 6. 8. 3. 5. 3. 2 q1. 3. .Testing 1 while bottom < = top bottom > top 2 3 if (elemArray [mid] == key 8 5 9 4 (if (elemArray [mid]< key 6 7 Binary search flow graph Independent paths q1. 2. 2. 7. 4. 3. 2 q1. 2. 4. 7. 4. 6. . . .Testing components by stubs where appropriate testing •Integrate individual components in levels until the complete system is created qIn practice. most integration involves a combination of these strategies qBottom-up Top-down testing Level 1 Testing sequence Level .Testing •Problems with both approaches.. Systems should not fail catastrophically.. misunderstanding. test all operations. Why test? • Gain confidence in the correctness of a part or a product. errors or invalid timing assumptions qTo test object classes. 12 . attributes and states qIntegrate object-oriented systems around clusters of objects 1. 2. What is testing? The act of checking if a part or a product performs as expected. qInterface defects arise because of specification misreading. Example: Correctness • Let P be a program (say. • Methods for testing various products are different. What to test? During software lifecycle several products are generated. 
• Check if there are any errors in a part or a product.

What to test?
During the software lifecycle several products are generated: requirements document, design document, software subsystems, software system. Each of these products needs testing. Test all!
• Examples: test a requirements document using scenario construction and simulation; test a design document using simulation; test a subsystem using functional testing.

What is our focus?
We focus on testing programs. There is a large collection of techniques and tools to test programs.

A few basic terms
Program: a collection of functions, as in C, or a collection of classes, as in Java. Programs may be subsystems or complete systems. These are written in a formal programming language.
Specification: a description of the requirements for a program. This might be formal or informal.
Test case or test input: a set of values of the input variables of a program. Values of environment variables are also included.
Test set: a set of test inputs.
Program execution: execution of a program on a test input.
Oracle: a function that determines whether or not the result of executing a program under test is as per the program's specifications.

Example: correctness
• Let P be a program (say, an integer sort program).
• Let S denote the specification for P. For sort, let S be:
Sample specification: P takes as input an integer N > 0 and a sequence of N integers called elements of the sequence. Let K denote any element of this sequence, 0 < K < (e-1) for some e. P sorts the input sequence in descending order and prints the sorted sequence.
P is considered correct with respect to a specification S if and only if, for each valid input, the output of P is in accordance with the specification S.

Errors, defects, faults
Error: a mistake made by a programmer. Example: misunderstood the requirements.
Defect/fault: the manifestation of an error in a program. The word bug is slang for fault.
Example of a fault:
Incorrect code: if (a < b) { foo(a, b); }
Correct code: if (a > b) { foo(a, b); }

Failure
• Incorrect program behavior due to a fault in the program.
• Failure can be determined only with respect to a set of requirement specifications.
• A necessary condition for a failure to occur is that execution of the program force the erroneous portion of the program to be executed. What is the sufficiency condition?
Errors and failure: error-revealing inputs cause the program to fail; erroneous outputs indicate failure.

Test-debug cycle
Debugging: suppose that a failure is detected during the testing of P. The process of finding and removing the cause of this failure is known as debugging. Testing usually leads to debugging, and testing and debugging usually happen in a cycle: test; if a failure occurs, debug; repeat until testing is complete. Done!

Types of testing, based on the source of test inputs
• Functional testing / specification testing / black-box testing / conformance testing: clues for test input generation come from the requirements.
• White-box testing / coverage testing / code-based testing: clues come from the program text.
• Stress testing: clues come from "load" requirements. What happens when the system is loaded or overloaded? For example, a telephone system must be able to handle 1000 calls over any 1-minute interval.
• Performance testing: clues come from performance requirements. For example, each call must be processed in less than 5 seconds. Does the system process each call in less than 5 seconds?
• OO testing: clues come from the requirements and the design of an OO program.
• Random testing: clues come from the requirements; tests are generated randomly using these clues.
• Robustness testing: clues come from the requirements; the goal is to test a program under scenarios not stipulated in the requirements.
• Fault- or error-based testing: clues come from the faults that are injected into the program text or are hypothesized to be in the program.
• Protocol testing: clues come from the specification of a protocol, as for example when testing a communication protocol.

Levels of testing
• Unit testing: testing of a program unit, the smallest testable piece of a program. One or more units form a subsystem.
• Subsystem testing: testing of a subsystem, a collection of units that cooperate to provide a part of the system functionality.
• Integration testing: testing of subsystems that are being integrated to form a larger subsystem or a complete system.
• System testing: testing of a complete system.
• Regression testing: test a subsystem or a system on a subset of the set of existing test inputs, to check that it continues to function correctly after changes have been made to an older version.

Software Testing Fundamentals
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding.

Testing objectives: Myers [MYE79] states a number of rules that can serve well as testing objectives:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered error.
Software testing demonstrates that software functions appear to be working according to specifications and performance requirements. The major testing objective is to design tests that systematically uncover types of errors with minimum time and effort.

Software testing principles: Davis [DAV95] suggests a set of testing principles:
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• The Pareto principle applies to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
• Testing should begin "in the small" and progress toward testing "in the large".
• Exhaustive testing is not possible.
• To be most effective, testing should be conducted by an independent third party.

Software Testability
According to James Bach, software testability is simply how easily a computer program can be tested. A set of program characteristics that lead to testable software:
• Operability: "the better it works, the more efficiently it can be tested."
• Observability: "what you see is what you test."
• Controllability: "the better we can control the software, the more the testing can be automated and optimized."
• Decomposability: "by controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting."
• Simplicity: "the less there is to test, the more quickly we can test it."
• Stability: "the fewer the changes, the fewer the disruptions to testing."
• Understandability: "the more information we have, the smarter we will test."

Test Case Design
Two general software testing approaches: black-box testing and white-box testing.
Black-box testing: knowing the specific functions of a software, design tests to demonstrate each function and check its errors. Major focus: functions, operations, external interfaces, external data and information.
White-box testing: knowing the internals of a software, design tests to exercise all internals of the software to make sure they operate according to specifications and designs. Major focus: internal structures, logic paths, control flows, data flows, internal data structures, conditions, loops, etc.

White-Box Testing and Basis Path Testing
White-box testing is also known as glass-box testing. Using white-box testing methods, we derive test cases that:
• guarantee that all independent paths within a module have been exercised at least once;
• exercise all logical decisions on their true and false sides;
• execute all loops at their boundaries and within their operational bounds;
• exercise internal data structures to assure their validity;
• guarantee to execute every statement in the program at least one time.
Basis path testing (a white-box testing technique):
• First proposed by Tom McCabe [MCC76].
• A test case design method that uses the control structure of the procedural design to derive test cases.
• Can be used to derive a logical complexity measure for a procedural design.
• Used as a guide for defining a basis set of execution paths.
Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When this metric is used in the context of basis path testing, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program.
Three ways to compute cyclomatic complexity:
• The number of regions of the flow graph corresponds to the cyclomatic complexity.
• Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
• Cyclomatic complexity, V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

Deriving Test Cases
Step 1: Using the design or code as a foundation, draw a corresponding flow graph.
Step 2: Determine the cyclomatic complexity of the resultant flow graph.
Step 3: Determine a basis set of linearly independent paths.
Step 4: Prepare test cases that will force execution of each path in the basis set.
Example basis set:
path 1: 1-2-10-11-13
path 2: 1-2-10-12-13
path 3: 1-2-3-10-11-13
path 4: 1-2-3-4-5-8-9-2-…
path 5: 1-2-3-4-5-6-8-9-2-…
path 6: 1-2-3-4-5-6-7-8-9-2-…
Example test case for path 1: value(k) = valid input, value(i) = -999 where 2 <= i <= 100; expected results: correct average based on k values and proper totals.

Equivalence Partitioning
Equivalence partitioning is a black-box testing method:
• divide the input domain of a program into classes of data;
• derive test cases based on these partitions.
An equivalence class represents a set of valid or invalid states for an input condition. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input domain. An input condition is: a specific numeric value, a range of values, a set of related values, or a Boolean condition.
Equivalence classes can be defined using the following guidelines:
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
• If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
• If an input condition is Boolean, one valid and one invalid equivalence class are defined.
Examples:
• area code: input condition, Boolean: the area code may or may not be present; input condition, range: value defined between 200 and 900.
• password: input condition, Boolean: a password may or may not be present; input condition, value: six-character string.
• command: input condition, set: containing the commands noted before.
• Integer D with input condition [-3, 10]: test values include the bounds -3 and 10, interior values such as 0 and 5, and invalid values just outside the range.
• Enumerated data E with input condition {3, 5, 100, 102}: test the members themselves plus an invalid value such as -1.

Boundary Value Analysis
• A test case design technique that complements equivalence partitioning.
• Objective: boundary value analysis leads to a selection of test cases that exercise bounding values.
Guidelines:
• If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and just above and just below a and b.
• If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers. Values just above and below minimum and maximum are also tested.
• Guidelines 1 and 2 are applied to output conditions.
If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary. Such data structures:
– array: input conditions empty, single element, full, out-of-boundary; search for an element that is inside the array and for one that is not.
– You can think about other data structures: list, stack, queue, set, and tree.

Reviews = non-execution-based testing. Testing = execution-based testing.
Dijkstra's law of testing: program testing can be used to show the presence of bugs, but never their absence. Testing is hence the process of executing a program with the intent to produce failures.

Functional testing (black-box)
Choose test values for the state and parameters:
– correct input
– input to induce exceptions
Test all combinations (combinatorial explosion).

Example of functional testing
Function f(p, q) with p: [0..100] and q: {red, green, blue}.
Calculate test values for p (interval rule):
– correct values: {0, 50, 100}
– exception values: {-100, -1, 101, 1000}
Values for q:
– correct values: {red, green, blue}
Note: functional testing may miss important aspects of the requirements (it won't test missing functionality).

White-box or structural selection
Consider the code:

{ if (x > 0) pos = pos + 1;
  if (x % 2) even = even + 1; }

Path coverage: {-2, -1, 1, 2}.
Use functional testing to select test cases; use structural testing to measure the degree of coverage.

The function `triangle` takes 3 integer parameters that are interpreted as the lengths of the sides of a triangle. It returns whether the triangle is equilateral (EQ), isosceles (IS), scalene (SC) or invalid (IN). Define a set of test cases for `triangle`.

typedef enum {EQ, IS, SC, IN} type;

type triangle(int a, int b, int c)
{
    if (a >= b + c || b >= a + c || c >= a + b) return IN;
    if (a == b && b == c) return EQ;
    if (a == b || b == c || a == c) return IS;
    return SC;
}

Test cases for `triangle`:
1) valid scalene
2) valid equilateral
3) valid isosceles
4) 3 isosceles permutations: a == b, b == c, a == c
5) a = 0
6) a = b = c = 0
7) a = -1
8) 3 invalid: a = b + c, b = a + c, c = a + b
9) 3 invalid: a > b + c, b > a + c, c > a + b
Assume the compiler checks for correct types.
Myers scoring: 1 point for each test case, and 1 point if you have specified the outcome of each test case. Maximum score is 10. In 1978, when the book was published, experienced professional programmers scored on average about 6.
Module, integration, and system testing
• Module testing: test each module in isolation, using stubs and drivers.
• Integration testing: test combinations of modules.
• System testing: test the entire system.

Top-down testing:
– poor testability: poor controllability and observability;
– cost of developing and maintaining stubs.
Bottom-up testing:
+ better controllability and observability;
– executable system available very late;
– cost of developing and maintaining drivers.

System testing
Test the entire system in a production environment. Various forms:
– volume/stress testing (load limits)
– performance testing (efficiency)
– reliability testing (e.g., mean-time-to-failure)
– recovery testing (e.g., h/w failure)
– acceptance testing (by the client)
– regression testing (maintenance)

Regression testing
As a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement than any other programming. Theoretically, after each fix one must run the entire bank of test cases previously run against the system, to ensure that it has not been damaged in an obscure way, and this is very costly.

Three dimensions of quality
– operational: correctness (functional testing), reliability (reliability testing), efficiency (performance testing), integrity (recovery testing), usability (acceptance testing)
– transitional: portability, interoperability
– revisional: maintainability, flexibility, reusability, testability

Test documents
– Test plan: the specification of the test system, including the test selection strategy; e.g., IEEE Standard 829 for S/W Test Documentation.
– Test report: report of the results of the test implementation.
– Planning: build the test "scaffolding" (stubs and drivers), as needed.

Limits of Testing
Concurrent / distributed / mobile systems: concurrency, non-determinism, mobility.
Non-functional properties: performance, reliability, fault-tolerance or security.

What is ET (exploratory testing)?
In scripted testing, tests are first designed and recorded; they may be executed at some later time or by a different tester. In exploratory testing, tests are designed and executed at the same time, and they often are not recorded. A "burst of testing" could mean any of these concurrent, interacting test tasks: study the product; model the test space; select what to cover; determine test oracles; configure the test system; operate the test system; observe the test system; evaluate the test results; organize notes; notice issues. Exploratory testing is useful when it is not obvious what the next test should be, or when we want to go beyond the obvious tests.
Exploratory testing: the core practice of a skilled tester. Excellent exploratory testers:
– challenge constraints and negotiate their mission;
– adapt their test strategy to fit the situation;
– tolerate substantial ambiguity, indecision, and time pressure;
– train their minds to be cautious, curious, and critical;
– know the difference between observation and inference;
– know how to design questions and experiments;
– spontaneously coordinate and collaborate;
– take notes and report results in a useful and compelling way;
– alert their clients to project issues that prevent good testing;
– have developed resources and tools to enhance performance;
– have earned the trust placed in them.

Getting the Most Out of ET
– Augment ET with scripted tests and automation; usually it's best to use a diversified strategy.
– Test in short bursts; avoid repeating the exact same test twice; encourage variability among testers.
– Let yourself be distracted by anomalies and new ideas; use your confusion as a resource; exploit inconsistency.
– Exploit the human factor: work in pairs or groups, get to know your developers, exploit subject matter expertise.

Learn the logic of testing: conjecture and refutation; abductive inference; correlation and causality; design of experiments; forward, backward, and lateral thinking; biases, heuristics, and human error; risk, benefit, and the meaning of "good enough".
Practice critical reading and interviewing: analyzing natural language specifications; analyzing and cross-examining a developer's explanation.
Learn to model a product rapidly: flowcharting, data flows, state model; matrices and outlines; function/data square; study the technology.
Use a "grid search" strategy to control coverage: model the product in some way, then specify broad test areas in terms of that model (not specific test cases). Keep track of what areas you have and have not tested in.
Learn to take reviewable notes: take concise notes so that they don't interrupt your work; record at least what your strategy was, what you tested, what problems you found, and what issues you have. Notice the test ideas you use that lead to interesting results.
Develop and use testing heuristics: try the heuristic test strategy model (on my web site); practice critiquing test techniques and rules of thumb.
Practice responding to scrutiny: Why did you test that? What was your strategy? How do you know your strategy was worthwhile?
Learn to spot obstacles to good testing: not enough information (risk, test strategy, test case, test plan, specification); the product isn't testable enough; not enough of the right test data.
If you're in a highly structured environment, consider using session-based test management to measure and control exploratory testing: it packages ET into chunks that can be tracked and measured, protects the intuitive process, and gives bosses what they want.
Exploratory questions: In what ways could your car fail, yet seem to you that it works? In what ways could your car work, yet seem to you that it fails?

Verification, Validation, Dynamic and Testing Techniques

Black-box testing treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. It is also known as functional, opaque-box, and closed-box testing, and is usually described as focusing on testing functional requirements. The content (implementation) of a black box is not known, and the function of the black box is understood completely in terms of its inputs and outputs. Many times we operate very effectively with black-box knowledge; in fact this is central to object orientation. As an example, most people successfully operate automobiles with only black-box knowledge.
Black-box testing is based on the view that any model can be considered to be a function that maps values from its input domain to values in its output range. It is applied by feeding test data to the model and evaluating the corresponding outputs, and is used to assess the accuracy of the model's input-output transformation: the concern is how accurately the model transforms a given set of input data into a set of output data. The more the system's input domain is covered in testing, the more confidence we gain in the accuracy of the system's input-output transformation. If we could test all input-output transformation paths, we could get 100% confidence; but for a reasonably large and complex simulation system, the number of input-output transformation paths can be very large, so it is virtually impossible to test them all. Therefore, the object of functional testing is to increase our confidence in model input-output transformation accuracy as much as possible rather than trying to claim absolute correctness.
Generation of test data is a crucially important but very difficult task. Examples of generation of test data: exhaustive testing, random testing, a systematic way.

Exhaustive Testing

void printBoolean(bool error)
// Print the Boolean value on the screen.
{
    if (error)
        cout << "True";
    else
        cout << "False";
    cout << endl;
}

Input: integer 9. Output: nothing on the screen. The simple model is designed to print every input integer on the screen.
Random Testing
Attempt testing in a haphazard way, entering data randomly until we cause the system to fail. Random testing is essentially a black-box testing strategy in which a system is tested by randomly selecting some subset of all possible input values. Test data may be chosen randomly or by a sampling procedure reflecting "the actual probability distribution on the input sequences." This allows one to estimate the "operational reliability". It is likely to uncover some errors in the system, but it is very unlikely to find them all.

void printInteger(int intValue)
// Intended to print every input integer on the screen;
// faulty: values of 10 or less produce nothing.
{
    if (intValue > 10)
        cout << intValue;
    cout << endl;
}

Unfortunately, we'll never want such a simple printer. To test this simple printer model, we may try hundreds of times until finally we feed an integer 9 to it and find that there is an error in the model, for it cannot print an input integer of 10 or less on the screen.

A Systematic Way of Testing
Fortunately, there are strategies for testing in a systematic way; we don't attempt exhaustive testing. One goal-oriented approach is to cover general classes of data, as well as boundaries and other special cases. For the integer-printing example, three cases: negative values, zero, positive values.
We should test at least one example of each category of inputs. It is not practical to test this model by running it with every possible data input: the number of elements in the set of intValue is clearly too large.

Field Testing
Field testing, known as live-environment testing, places the system in an operational (real) environment, i.e., using real data as the input source. The purpose is collecting as much information as possible: put the system in real-life situations and see the direct effectiveness of the system in those situations. Although it is usually difficult, expensive and sometimes impossible to devise meaningful field tests for complex systems, their use where possible helps both the project team and decision makers to develop confidence in the model. It is necessary to demonstrate system capabilities for acceptance.
Examples of application of field testing: it is especially useful for validating models of military combat systems. Testing an expert system with actual data is a method for the validation of expert systems; in the development of an expert system there is typically substantial asymmetry of expertise, since the expert knows more about the domain than the developers of the system, and unexplained behavior may reveal errors in the system representation.

Host and Target Environments
Since the advent of high-level languages, the practice of developing software in a different environment from the environment in which it will eventually be used has become common.
Host: the development environment. Target: the environment in which the system will be used.
Which system should be tested in the target environment? Theoretically, all testing should be conducted in the target environment; after all, it is the target environment in which the system will eventually be used. However, constraining all testing to the target environment can result in a number of problems:
• Target environments are usually less convenient to work with than host environments.
• The target environment may not yet be available.
• Target environments and associated development tools are usually much more expensive to provide to system developers than host environments.
It is rare for a host environment to be identical to a target environment. There may be just minor differences in configuration, such as input-output devices, or there may be major differences, such as between a workstation and an embedded control processor, where the two environments may even have a different instruction set.
Test implementation issues: direct access to target hardware; calls to target-environment systems such as an operating system or real-time kernel.
Acceptance testing, the final stage of system testing, has to be conducted in the target environment! Acceptance of a system must be based on tests of the actual system, and not a simulation in the host environment.

Error-Based Testing
In error-based testing, the goal is to construct test cases that reveal the presence or absence of specific errors. Error-based testing is present in nearly all heuristic approaches to testing: informal debugging sessions frequently include checks on extreme values of variables, and the folklore of many application areas consists of heuristic rules (e.g., in testing compilers, one of the first test cases tried is usually the "null" program).

Fault/Failure Insertion Testing
Fault: incorrect system component. Failure: incorrect behavior of a system component.
Fault/failure insertion testing is an error-based testing technique: insert a kind of fault or a kind of failure into the system and observe whether the system produces the invalid behavior as expected.

Mutants
The mutants are a set of models which are "close" to the model being tested; "close" refers to the potential errors which could have occurred in the model being tested.
Test data adequacy: a test data set is adequate if the model runs successfully on the data set and if all incorrect models run incorrectly.
Basic Methodology
D: the test data used to test the model.
M(model): a set of mutants of the model, differing from the model in containing a single error chosen from a given list of error types; m is the number of elements in M(model).
E(model): the set of mutants equivalent to the model; e is the number of elements in E(model).
DM(model, D): the set of mutants that return results differing from the results of the model, i.e., the mutants distinguished by the test data D; dm is the number of elements in DM(model, D).
Some of the mutant models will turn out to be functionally equivalent to the model: they are indistinguishable from the model under any test data. If a mutant is distinguished by the test data, we say it dies on the execution.

Examples of application of fault/failure insertion testing:
• Budd and Miller have studied fault/failure insertion testing as a tool to uncover typographical errors in matrix calculation programs. It can be used to uncover simple statement errors, domain errors, special-values errors, dead-code errors, dead-branch errors, data-flow errors, and coincidental-correctness errors.
• It was used by Lipton to uncover resistant errors in production programs.
• Its application to regression testing was carried out by DeMillo.

Mutation score
ms(model, D): the mutation score, defined as the fraction of the non-equivalent mutants of the model which are distinguished by the test data D:
ms(model, D) = dm / (m - e)
A mutation score is a number in the interval [0, 1]. A high score indicates that D is very close to being adequate for the model relative to the set of mutants of the model. A low score indicates a weakness in the test data: the test data does not distinguish the tested model from a mutant model which contains an error.
If all mutants die, it is highly likely that the tested model is correct. If some mutants live and the test data D is adequate, then either the live mutants are functionally equivalent to the model or there still might be complex errors in the model.
The Top 10 Testing Problems
10. Not enough training
9. "Us vs. Them" mentality
8. Lack of test tools
7. Lack of management understanding/support of testing
6. Lack of customer and user involvement
5. Not enough time for testing
4. Over-reliance on independent testers
3. Rapid change
2. Testers are in a "lose/lose" situation
1. Having to say "no"

Solutions for Training
Obtain formal training in testing techniques. Read books and articles. Attend conferences. Seek certification: CSTE (Certified Software Test Engineer). Train developers to become excellent testers. Train people in tool usage.

Solutions to the Teamwork Challenge
The goal is to get to "us and them." Each person on the team can have a role in testing:
– Developers: unit and structural testing
– Testers: independent testing
– Users: business-oriented testing
– Management: to support testing activities

Solutions for Acquiring and Using Test Tools
Identify a "champion" for obtaining test tools. Base the case for test tools on costs vs. benefits, and measure the benefits. Have a basic testing process in place.

Solutions to Educating Management in Testing Issues
Cultural change is needed. Focus your message to management on:
– reducing the cost of rework
– meeting the project schedule
The benefits of testing must relate to these two things to be persuasive.

Solutions to Identifying and Involving the Customer in Testing
Involve the customer and users throughout the project by performing reviews and inspections. Understand the difference between the customer and users. Include users on the system test team. Perform user acceptance testing. Focus on testable requirements.

Solutions to the Time Crunch
Base schedules and estimates on measurable testing activities: scripts to be executed, cases to be tested, requirements to be tested. Have contingency plans for schedule slippage. Use automated testing tools. Testers are not to blame for bottlenecks; it is management's responsibility to have an efficient process.

Solutions to Overcoming Throwing Stuff Over the Wall
Developers must take ownership and responsibility for the quality of their work. Quality control is most effective when performed at the point of creation. Get management support for developer responsibility for quality.

Solutions for Hitting a Moving Target
The testing process must accommodate change. Manage the rate and degree of change. Integrate automated testing tools into the project.

Solutions for Fighting a Lose-Lose Situation
The perception of testing must change: testers are paid to find defects, and each defect found is one more the customer or user will not find. Keep the test results objective.

Solutions for Having to Say "No"
Most responsibility is on management to:
– have a quality software development process in place;
– understand that testing is only an evaluation activity;
– have contingency plans in place in case of problems;
– accept the honest facts.

Test Terminology (QAI Workbench Model)
Verification: all QC activities throughout the life cycle that ensure interim deliverables meet their specifications.
Validation: the "test phase" of the life cycle, which ensures that the end product (e.g., software or system) meets specifications or user needs.

When Testing Occurs
Functional Tests –Tests that validate business requirements –Tests what the system is supposed to do Black Box Tests –Functional testing –Based on external specifications without knowledge of how the system is constructed 41 . standalone module or unit of code. both functional and structural testing need to be performed. To effectively test systems. The Economics of Testing .Making the Message to Management Where Defects Originate Where Testing Resources are Used 42 . the more intense the test should be The higher the test coverage. the more confidence you’ll have in the test 43 . spend time early in the system development (or purchase) process to make sure the requirements and design are correct. Basic Testing Principles Test early and often Involve everyone on the project Management support is critical The greater the risk. If you want to reduce the cost of testing. Test Strategy Planning Step 1 . Testing –Reliability Who Will Conduct Testing? –Users –Developers What Are the Tradeoffs? –Schedule –Cost/Resources –Quality How Critical is the System to the Organization? –Risk Assessment Risk Assessment A Tool For Performing Risk Assessment 45 . Test plans should be specific.Execute Tests Step 4 . Step 2 . yet flexible for change. –There should be a one-to-one correspondence between system objectives and test objectives. Test plans should be reviewed just as any other project deliverable.Set Test Objectives Step 2 . the easier the test.Evaluate/Report Test Results Step 1 .Set Test Objectives Select test team Perform risk assessment Define test objectives –A test objective is what the test is to validate.Testing Effective Testing Methods and Techniques The QAI Testing Process Step 1 .Develop Test Plan Step 3 . The test plan should be easily read by management. Major Elements of a Test Plan Introduction 46 .Develop Test Plan The better the test plan. Test planning should be a team activity.. 
Roughly one-third of the time can be allocated to each of:
- Test planning
- Test execution
- Test evaluation

Tips for Test Planning
- Start early.
- Keep the test plan concise and readable.
- Frequently have the test team review the test plan.
- Keep the test plan flexible to deal with change.

Step 3 - Execute Tests
- Select test tools
- Develop test cases
- Execute tests

Select Test Tools
A test tool is any vehicle that assists in testing. It may be manual or automated.

Automated Tools
Not the complete solution.

Critical Success Factors
- Get senior management support for buying and integrating test tools
- Know your requirements
- Be reasonable in your expectations - start small and grow
- Have a strong testing process that includes tools
- Don't cut the training corner

Regression Testing
- Why perform regression testing?
- The process
- The issues
- The role of automated testing tools
- How much is enough?

No Regression Testing: Hidden Defects
Regression Testing: No Hidden Defects

Regression Testing - The Process

Regression Testing Issues
- Test data must be maintained. Data conversion may be required; watch for date dependencies.
- There must be a way to conduct two identical tests, and a way to compare two identical tests.
- There must be a stable baseline version for comparisons.
- The greater the difference between versions, the less effective the regression test.

How Much is Enough?
The easy answer: "It depends." What does it depend on?
- Risk
- Scope of the change
- System dependencies

Proving the Value of Regression Testing
- You need a benchmark of non-regression testing.
- Measure time and defects.
- Consider the initial investment in creating the test environment and test scripts/procedures.

Tips for Performing Regression Testing
- Control the scope of testing.
- Build a reusable test bed of data.
- Build a repeatable and defined process for regression testing.
- Base the amount of regression testing on risk.
- Consider manual vs. automated.
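The baseline-comparison idea behind a repeatable regression process can be sketched as follows. This is a minimal illustration, not a real harness: `system_under_test` and the captured `baseline` are hypothetical stand-ins for the real system and its maintained test bed.

```python
# Sketch of baseline-comparison regression testing (illustrative only).
# In practice the inputs come from a reusable, maintained test bed and the
# expected outputs from a stable baseline version of the system.

def system_under_test(x):
    """Hypothetical function standing in for the current software version."""
    return x * 2

# Expected outputs captured from the last known-good version, keyed by input.
baseline = {1: 2, 5: 10, -3: -6}

def run_regression(func, baseline):
    """Re-run every baseline input; report inputs whose output changed."""
    failures = []
    for test_input, expected in baseline.items():
        actual = func(test_input)
        if actual != expected:
            failures.append((test_input, expected, actual))
    return failures

print(run_regression(system_under_test, baseline))  # [] means no regressions
```

Because the comparison is mechanical, this style of check is a natural candidate for the automated tools discussed above.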
Some tests cannot use previous versions of test data.

Step 3 - Execute Tests: Develop Test Cases
Functional techniques:
- Requirements-based
- Process-based
- Data-oriented
- Boundary value analysis
- Decision tables
- Equivalence partitioning
Structural techniques:
- Complexity analysis
- Coverage: statement, branch, condition, multi-condition, path

Step 4 - Evaluate/Report Test Results

Use automated tools. Return on Investment (ROI) can include:
- Shorter test times
- More accurate testing
- More consistent testing
- Improved communication of defects
- More effective testing (e.g., fewer test cases needed to find more defects)

Levels of test process maturity:
- Level 1 - Validation
- Level 2 - Defect Management
- Level 3 - Statistical Process Control

Software Testing

Table of Contents
- Introduction to Software Testing
- Basic Methods
- Testing Levels: Unit Testing; Integration Testing; External Function Testing; System Testing; Regression Testing; Acceptance Testing; Installation Testing
- Completion Criteria
- Metrics
- Organization
- Testing and SQA: Inspections
- A Closer Look: Fault Based Methods
- Conclusion

Introduction to Software Testing
Software testing is a vital part of the software lifecycle. To understand its role, it is instructive to review the definitions of software testing in the literature. According to Humphrey [1], software testing is defined as "the execution of a program to find its faults". Among alternative definitions of testing are the following:
- "...the process of executing a program with the intent of finding errors." Under this definition, a successful test is one that finds a defect.
- "...the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results."
- "...any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results."
- "Testing is the measurement of software quality."
Of course, none of these definitions claims that testing shows that software is free from defects: testing can show the presence, but not the absence, of problems. Besides finding faults, we may also be interested in testing performance, safety, fault-tolerance, or security. It is important to remember that testing assumes that requirements are already validated.

This sounds simple enough, but there is much to consider when we want to do software testing. Testing often becomes a question of economics: for projects of a large size, more testing will usually reveal more bugs. The question then becomes when to stop testing, and what is an acceptable level of bugs. This is the question of 'good enough software'.

Software testing must be considered before implementation. In considering testing, most people think of the activities described in the lifecycle figure (not reproduced here).

Basic Methods

White Box Testing
White box testing is performed to reveal problems with the internal structure of a program. This requires the tester to have detailed knowledge of the internal structure. A common goal of white-box testing is to ensure that a test case exercises every path through a program. A fundamental strength that all white box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white-box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases.

Black Box Testing
Black box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Performance tests evaluate response time, throughput, memory usage, device utilization, and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error handling capabilities.
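The coverage metrics mentioned under white-box testing can be illustrated with a toy example. The function and the inputs below are invented for this sketch; the point is only that a single if/else has two branches, so branch coverage needs at least one test input per branch.

```python
# Branch-coverage illustration: classify() contains one decision with two
# outcomes, so a suite needs one input per outcome for 100% branch coverage.

def classify(n):
    if n < 0:
        return "negative"
    else:
        return "non-negative"

# A single test input would exercise only one branch (50% branch coverage);
# the second input below brings branch coverage to 100%.
tests = [(-1, "negative"), (4, "non-negative")]
for value, expected in tests:
    assert classify(value) == expected
print("both branches exercised")
```

Statement, condition, multi-condition, and path coverage refine the same idea with progressively stricter criteria.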
Functional tests typically exercise code with valid or nearly valid input for which the expected output is known; this includes concepts such as 'boundary values'. Reliability tests monitor system response to representative user input, counting failures over time to measure or certify reliability.

Testing Levels
Testing occurs at every stage of system construction. The different levels of testing reflect that testing, in the general sense, is not a single phase of the software lifecycle; it is a set of activities performed throughout the entire software lifecycle. The activities after implementation are normally the only ones associated with testing, as is suggested by the input arrows into the testing activities in the original figure. The larger a piece of code is when defects are detected, the harder and more expensive it is to find and correct the defects. The following paragraphs describe the testing activities from the 'second half' of the software lifecycle.

Unit Testing
Unit testing exercises a unit in isolation from the rest of the system. A unit is typically a function or a small collection of functions (libraries, classes), implemented by a single developer. The main characteristic that distinguishes a unit is that it is small enough to test thoroughly, if not exhaustively. The small size of units allows a high level of code coverage. It is also easier to locate and remove bugs at this level of testing. Developers are normally responsible for the testing of their own units, and these are normally white box tests.

Integration Testing
One of the most difficult aspects of software development is the integration and testing of large, untested sub-systems. The integrated system frequently fails in significant and mysterious ways, and it is difficult to fix it. Integration testing exercises several units that have been combined to form a module, subsystem, or system, to make sure the units work together. Integration testing focuses on the interfaces between units. The nature of this phase is certainly 'white box', as we must have a certain knowledge of the units to recognize if we have been successful in fusing them together in the module.

There are three main approaches to integration testing: top-down, bottom-up, and 'big bang'.

Top-Down
Top-down combines, tests, and debugs top-level routines that become the test 'harness' or 'scaffolding' for lower-level units. The control program is tested first, and modules are integrated one at a time. Major emphasis is on interface testing. It is hard to maintain a pure top-down strategy in practice.

Bottom-Up
Bottom-up combines and tests low-level units into progressively larger modules and subsystems. It allows early testing aimed at proving the feasibility and practicality of particular modules, and modules can be integrated in various clusters as desired. Major emphasis is on module functionality and performance.
Advantages: no test stubs are needed; it is easier to adjust manpower needs; errors in critical modules are found early.
Disadvantages: test drivers are needed; many modules must be integrated before a working program is available; interface errors are discovered late.
Comments: at any given point, more code has been written and tested than with top-down testing. Some people feel that bottom-up is a more intuitive test philosophy.

'Big Bang'
'Big bang' testing is, unfortunately, the prevalent integration test 'method'. This is waiting for all the module units to be complete before trying them out together: the approach waits for all the modules to be constructed and tested independently, and when they are finished, they are integrated all at once. While this approach is very quick, there is really nothing that can be demonstrated until later in the process. The cost of drivers and stubs in the top-down and bottom-up testing methods is what drives the use of 'big bang' testing.

Integration tests can rely heavily on stubs or drivers. Stubs stand in for finished subroutines or sub-systems. A stub might consist of a function header with no body, return hard-coded values, read and return test data from a file, or obtain data from the tester. Stub creation can be a time-consuming piece of testing.

System Testing
The 'system test' is a more robust version of the external function test. Because of the similarities between the test suites in the external function and system test phases, a project may leave one of them out, or we may not have enough time to run both. The essential difference between 'system' and 'external function' testing is the test platform. In system testing, the platform must be as close to production use in the customers' environment as possible, including factors such as hardware setup and database size and complexity. It may be too expensive to replicate the user environment for the system test. By replicating the target environment, we can more accurately test 'softer' system features (performance, security, and fault-tolerance). Testers will run tests that they believe reflect the end use of the system. This phase is sometimes known as an alpha test.

Acceptance Testing
An acceptance (or beta) test is an exercise of a completed system by a group of end users to determine whether the system is ready for deployment. Here the system will receive more realistic testing than in the 'system test' phase, as the users have a better idea how the system will be used than the system testers. The errors found here have to be fixed, and as we have seen, errors that are found 'later' take longer to fix.

Regression Testing
Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components.
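The stubs and drivers described under integration testing above might look like the following in Python. All names here are hypothetical, invented for the sketch: the stub returns hard-coded values in place of an unfinished service, and the driver is the small harness that exercises a unit during bottom-up integration.

```python
# A stub stands in for an unfinished subsystem during top-down integration.
# This one returns hard-coded values, as described above; a richer stub
# might read canned responses from a file or prompt the tester.

def fetch_exchange_rate_stub(currency):
    """Stub for a not-yet-implemented rate service (hypothetical name)."""
    canned_rates = {"EUR": 1.1, "GBP": 1.3}
    return canned_rates.get(currency, 1.0)

def convert(amount, currency, rate_source):
    """Top-level routine under test; the rate source is injected so the
    stub can substitute for the real service."""
    return round(amount * rate_source(currency), 2)

# A driver is the mirror image: a harness that calls a lower-level unit
# with fixed inputs during bottom-up integration.
def driver():
    return [convert(100, "EUR", fetch_exchange_rate_stub),
            convert(100, "GBP", fetch_exchange_rate_stub)]

print(driver())  # [110.0, 130.0]
```

Passing the collaborator in as a parameter is one common way to make a unit stub-friendly; a test double can then replace the real dependency without code changes.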
External Function Testing
The 'external function test' is a black box test to verify that the system correctly implements specified functions.

Regression testing has been receiving more attention as corporations focus on fixing the 'Year 2000 Bug'. The goal of most Y2K work is to correct the date-handling portions of a system without changing any other behavior. A new 'Y2K' version of the system is compared against a baseline original system. With the obvious exception of date formats, the performance of the two versions should be identical. This means not only do they do the same things correctly, they also do the same things incorrectly: a non-Y2K bug in the original software should not have been fixed by the Y2K work.

A frequently asked question about regression testing is 'The developer says this problem is fixed. Why do I need to re-test?', to which the answer is 'The same person probably told you it worked in the first place'. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:

              Successful Change    Unsuccessful Change
New Bug       Bad                  Bad
No New Bug    Good                 Bad

Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing.

It can be difficult to determine how much re-testing is needed. (Example: if I put a new radio in my car, do I have to do a complete road test to make sure the change was successful?) An interesting approach to limiting test cases is based on whether we can confine testing to the "vicinity" of the change, where boundaries can be placed around modules and subsystems. A new breed of regression test theory tries to identify, through program flows or reverse engineering, which tests are affected by a change; the resulting graphs can determine which tests from the existing suite may exhibit changed behavior on the new version.

Most industrial testing is done via test suites: automated sets of procedures designed to exercise all parts of a program and to show defects. While the original suite could be used to test the modified software, this might be very time-consuming. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate modified software. There are three main groups of test selection approaches in use:
- Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.
- Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
- Safe approaches attempt instead to select every test that will cause the modified program to produce different output than the original program.

Installation Testing
The testing of full, partial, or upgrade install/uninstall processes.

Completion Criteria
There are a number of different ways to determine that the test phase of the software life cycle is complete. Some common examples are:
- All black-box test cases are run
- White-box test coverage targets are met
- Rate of fault discovery goes below a target value
- A target percentage of all faults in the system is found
- Measured reliability of the system achieves its target value (mean time to failure)
- Test phase time or resources are exhausted

Metrics
When we begin to talk about completion criteria, we move naturally into a discussion of software testing metrics. As with all domains of the software process, there are hosts of metrics that can be used in testing. Rather than discuss the merits of specific measurements, it is more important to know what they are trying to achieve. Three themes prevail: Quality Assessment (what percentage of defects is captured by our testing process?), Risk Management, and Process Improvement.

Metrics Goals
As stated above, the major goal of testing is to discover errors in the software. A secondary goal is to build confidence that the system will work without error when testing does not reveal any errors. Then what does it mean when testing does not detect any errors? We can say that either the software is high quality or the testing process is low quality. To answer either of these concerns we need a measurement of the quality of the system, and we need metrics on our testing process if we are to tell which is the right answer.

Quality Assessment
The most commonly used means of measuring system quality is defect density, represented by:

    Defect Density = # of Defects / System Size

where system size is usually expressed in thousands of lines of code (KLOC). Defect density accounts only for defects that are found in-house or over a given amount of operational field use. Although it is a useful indicator of quality when used consistently within an organization, there are a number of well-documented problems with this metric; the most popular relate to inconsistent definitions of defects and system sizes.

Other metrics attempt to estimate how many defects remain undetected. A simplistic case of error estimation is based on "error seeding". We assume the system has X errors, and it is artificially seeded with S additional errors. After testing, we have discovered Tr 'real' errors and Ts seeded errors. If we assume (a questionable assumption) that the testers find the same percentage of seeded errors as real errors, we can calculate X:

    S / (X + S) = Ts / (Tr + Ts)
    X = S * ((Tr + Ts) / Ts - 1)

For example, if we find half the seeded errors, then the number of 'real' defects found represents half of the total defects in the system.
Estimating the number and severity of undetected defects allows informed decisions on whether the quality is acceptable or whether additional testing is cost-effective. It is very important to consider maintenance costs and redevelopment efforts when deciding on the value of additional testing.

Risk Management
Metrics involved in risk management measure how important a particular defect is (or could be). While these are reasonable measures for assessing quality, they are more often used to assess the risk (financial or otherwise) that a failure poses to a customer, or in turn to the system supplier. A truism is that there is never enough time or resources for complete testing, making prioritization a necessity.

One approach is known as Risk Driven Testing, where Risk has a specific meaning. The failure of each component is rated by Impact and Likelihood. Impact is a severity rating, based on what would happen if the component malfunctioned. Likelihood is an estimate of how probable it is that the component would fail. Together, Impact and Likelihood determine the Risk for the piece; a higher rating on each scale corresponds to higher overall risk from defects in the component. With a rating scale, this might be represented visually as a risk matrix (figure not reproduced here). The relative importance of likelihood and impact will vary from project to project and company to company. These measurements allow us to prioritize our testing and repair cycles.

A system-level measurement for risk management is the Mean Time To Failure (MTTF). Test data sampled from realistic beta testing is used to find the average time until system failure. This data is extrapolated to predict overall uptime and the expected time the system will be operational. Sometimes measured with MTTF is the Mean Time To Repair (MTTR), which represents the expected time until the system will be repaired and back in use after a failure is observed. Availability is the probability that a system is available when needed, obtained by calculating MTTF / (MTTF + MTTR).

Process Improvement
It is generally accepted that to achieve improvement you need a measure against which to gauge performance. To improve our testing processes we need the ability to compare the results from one process to another. Popular measures of the testing process report:
- Effectiveness: number of defects found and successfully removed / number of defects presented
- Efficiency: number of defects found in a given time
By tracking our test efficiency and effectiveness, we can evaluate the changes made to the testing process.

It is also important to consider system failures reported in the field by the customer. If a high percentage of customer-reported defects were not revealed in-house, it is a significant indicator that the testing process is incomplete. A good defect reporting structure will allow defect types and origins to be identified. We can use this information to improve the testing process by altering and adding test activities to improve our chances of finding the defects that are currently escaping detection.
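The availability and risk quantities in this section can be combined in a small sketch. The 1-5 rating scale and the component names are assumptions made for the example; the availability formula is MTTF / (MTTF + MTTR) as given above.

```python
# Reliability and risk figures from this section: availability derived from
# MTTF and MTTR, and a simple risk score as impact x likelihood (both rated
# on an assumed 1-5 scale).

def availability(mttf_hours, mttr_hours):
    """Probability the system is available: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def risk_score(impact, likelihood):
    """Higher impact and likelihood both raise the component's risk."""
    return impact * likelihood

# A system averaging 490 hours between failures with 10-hour repairs is
# available 98% of the time.
print(round(availability(490, 10), 2))  # 0.98

# Rank hypothetical components by risk to prioritize test effort.
components = {"payment": risk_score(5, 3), "report footer": risk_score(1, 2)}
print(max(components, key=components.get))  # payment
```

Sorting components by such a score is one concrete way to act on the "prioritization is a necessity" point above.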
Testing metrics give us an idea of how reliable our testing process has been at finding defects, and can be a reasonable indicator of its performance in the future. It must be remembered that measurement is not the goal; improvement through measurement, analysis, and feedback is what is needed.

Software Testing Organization
Test Groups. The following summarizes the pros and cons of maintaining separate test groups.

Pros
- Testers are usually the only people to use a system heavily as experts. Independent testing is typically more efficient at detecting defects related to special cases, interaction between modules, and system-level usability and performance problems.
- Programmers are neither trained, nor motivated, to test. Overall, with a separate test group, more of the defects in the product will likely be detected.
- Test groups can provide insight into the reliability of the software before it is actually shipped.

Cons
- Having separate test groups can result in duplication of effort (e.g., the test group expends resources executing tests developers have already run).
- The detection of defects happens at a later stage. Furthermore, designers may have to wait for responses from the test group to proceed. This problem can be exacerbated in situations where the test group is not physically collocated with the design group.
- The cost of maintaining separate test groups.

The key to optimizing the use of separate test groups is understanding that developers are able to find certain types of bugs very efficiently, and testers have greater abilities in detecting other bugs. An important consideration would be the size of the organization and the criticality of the product. Testers can review their test plans with developers as they are creating their designs; the developer may thus be more aware of the potential defects and act accordingly.

Testing Problems
When trying to effectively implement software testing, there are several mistakes that organizations typically make. The errors fall into (at least) 4 broad classes:

Misunderstanding the role of testing. The purpose of testing is to discover defects in the product. The tests must verify that the product does what it is supposed to, while not doing what it should not.

Poor planning of the testing effort. Test plans often over-emphasize testing functionality at the expense of potential interactions. This mentality can also lead to incomplete configuration testing and inadequate load and stress testing. Neglecting to test documentation and/or installation procedures is also a risky decision. In any case, it is important to have an understanding of the relative criticality of defects when planning tests.

Using the wrong personnel as testers. The role of testing should not be relegated to junior programmers, nor should it be a place to employ failed programmers. A test group should include domain experts, and need not be limited to people who can program. A test team that lacks diversity will not be as effective.

Poor testing methodology. Just as programmers often prefer coding to design, testers can be too focussed on running tests at the expense of designing them. Using code coverage as a performance goal for testers, or ignoring coverage entirely, are poor strategies.

Testing and SQA: Inspections
Inspections are undoubtedly a critical tool to detect and prevent defects. Inspections are strict and close examinations conducted on specifications, design, code, test, and other artifacts. An important point about inspections is that they can be performed much earlier in the design cycle, well before testing begins; thus, testing is something that can be started much earlier than is normally the case. The detection of defects early is critical, both in terms of time and money: the closer to the time of its creation that we detect and remove a defect, the lower the cost. This is illustrated in the figure "Defect Detection and Cost to Correct" (Source: McConnell; figure not reproduced here).
Evidence of the benefits of inspections abounds. The literature (Humphrey 1989) reports cases where:
- inspections are up to 20 times more efficient than testing
- code reading detects twice as many defects/hour as testing
- 80% of development errors were found by inspections
- inspections resulted in a 10x reduction in the cost of finding errors

In the face of all this evidence, it has been suggested that "software inspections can replace testing". Inspections could replace testing if and only if all information gleaned through testing could be obtained through inspection. This is not true for several reasons. Firstly, testing can provide a measure of software reliability (i.e., failures/execution time) that is unobtainable from inspections; this measure can often be used as a vital input to the release decision, particularly in the case of mission-critical systems. Secondly, testing can identify defects due to complex interactions in large systems (e.g., timing/synchronization). While inspections can detect such a defect if it exists, as systems become more complex the chances of one person understanding all the interfaces and being present at all the reviews are quite small. Thirdly, testing identifies system-level performance and usability issues that inspections cannot. Yet, while the benefits of inspections are real, they are not enough to replace testing: since inspections and testing provide different, equally important information, one cannot replace the other. Depending on the product, the optimal mix of inspections and testing may be different!

A Closer Look: Fault Based Methods
The following paragraphs describe some newer techniques in the software testing field. These methods attempt to address the belief that current techniques for assessing software quality are not adequate. Voas et al. suggest that the traditional belief that improving and documenting the software development process will increase software quality is lacking: quality processes cannot demonstrate reliability, and the amount of testing (which is product-focussed) required in order to demonstrate high reliability is impractical to perform.

Fault based methods include Error Based Testing, fault seeding, mutation testing, and fault injection, among others. Error based testing defines classes of errors, as well as inputs that will reveal any error of a particular class. Mutation testing injects faults into code to determine optimal test inputs. Fault seeding: based on the number of artificial faults discovered during testing, inferences are made on the number of remaining 'real' faults. Fault injection evaluates the impact of changing the code or state of an executing program on the behavior of the software. After briefly describing each of the 4 techniques, fault injection will be discussed in more detail.
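As a preview of the fault-injection idea, here is a minimal perturbation-style sketch. Everything here is invented for illustration: an internal value X is corrupted many times at random, and we count how often the corruption propagates to a changed output T.

```python
# Sketch of perturbation-based fault injection: corrupt a state value X
# repeatedly and measure how often the corruption changes the output T.
import random

def compute_T(x):
    """Hypothetical program fragment whose output T depends on state X."""
    return max(0, x)  # the clamp masks some corruptions of X

def perturb(x, rng):
    """Generate a randomly corrupted value of X."""
    return x + rng.uniform(-10, 10)

def undesired_fraction(x, trials=10_000, seed=1):
    """Fraction of corrupted X values that propagate to a changed T."""
    rng = random.Random(seed)
    baseline = compute_T(x)
    changed = sum(1 for _ in range(trials)
                  if compute_T(perturb(x, rng)) != baseline)
    return changed / trials

# With x = -5, only perturbations pushing X above 0 change T, so roughly a
# quarter of the corruptions are visible at the output.
print(undesired_fraction(-5))
```

A low propagation rate means corruptions tend to be masked, which is exactly the kind of insight the text says fault insertion gives about where testing effort should be concentrated.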
Fault seeding implies the injection of faults into software prior to test. For this to be valid, the seeded faults must be assumed to be similar to the real faults.

Fault injection is not a new concept: hardware design techniques have long used inserted fault conditions to test system behavior. It is as simple as pulling the modem out of your PC during use and observing the results to determine if they are safe and/or desired. The injection of faults into software is not so widespread, though it would appear that companies such as Hughes Information Systems, Hughes Electronics, and Microsoft have applied the techniques or are considering them. Properly used, fault insertion can give insight as to where testing should be concentrated, whether or not systems are fail-safe, and how much testing should be done. The technique can be applied to internal source code, as well as to 3rd party software, which may be a "black box". By using perturb(x) to generate changed values of X (i.e., a random number generator), you can quickly determine how often corrupted values of X lead to undesired values of T.

Conclusion
Software testing is an important part of the software development process. It is not a single activity that takes place after code implementation, but is part of each stage of the lifecycle. A successful test strategy will begin with consideration during requirements specification; testing details will be fleshed out through high and low level system designs, and testing will be carried out by developers and separate test groups after code implementation. As with the other activities in the software lifecycle, testing has its own unique challenges. As software systems become more and more complex, the importance of effective, well planned testing efforts will only increase.
Software Testing Techniques

Software Testing Fundamentals
Software testing demonstrates that software functions appear to be working according to specifications and performance requirements.

Testing Objectives
Myers [MYE79] states a number of rules that can serve well as testing objectives:
- Testing is a process of executing a program with the intent of finding an error.
- A good test case is one that has a high probability of finding an undiscovered error.
- A successful test is one that uncovers an as-yet undiscovered error.
The major testing objective is to design tests that systematically uncover types of errors with minimum time and effort.

Software Testing Principles
Davids [DAV95] suggests a set of testing principles:
- All tests should be traceable to customer requirements.
- Tests should be planned long before testing begins.
- The Pareto principle applies to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
- Testing should begin "in the small" and progress toward testing "in the large".
- Exhaustive testing is not possible.
- To be most effective, testing should be conducted by an independent third party.

Software Testability
Software testability is simply how easily a computer program can be tested. A set of program characteristics that lead to testable software:
- Operability: "The better it works, the more efficiently it can be tested."
- Observability: "What you see is what you test."
- Controllability: "The better we can control the software, the more the testing can be automated and optimized."
- Decomposability: "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting."
- Simplicity: "The less there is to test, the more quickly we can test it."
- Stability: "The fewer the changes, the fewer the disruptions to testing."
- Understandability: "The more information we have, the smarter we will test."

Test Case Design
Two general software testing approaches: black-box testing and white-box testing.
- Black-box testing: knowing the specified functions of the software, design tests to demonstrate each function and check its errors. Major focus: functions, operations, external interfaces, external data and information.
- White-box testing: knowing the internals of the software, design tests to exercise all internals of the software to make sure they operate according to specifications and designs. Major focus: internal structures, logic paths, control flows, data flows, internal data structures, conditions, loops, etc.

White-Box Testing and Basis Path Testing
White-box testing, also known as glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, we derive test cases that:
- Guarantee that all independent paths within a module have been exercised at least once.
- Exercise all logical decisions on their true and false sides.
- Execute all loops at their boundaries and within their operational bounds.
- Exercise internal data structures to assure their validity.
- Guarantee that every statement in the program executes at least one time.

Basis path testing (a white-box testing technique):
- First proposed by Tom McCabe [MCC76].
- Can be used to derive a logical complexity measure for a procedural design.
- Used as a guide for defining a basis set of execution paths.

Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the global complexity of a program.
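As a worked illustration of this measure, the snippet below computes V(G) = E - N + 2 for a small flow graph, then cross-checks it against V(G) = P + 1. The graph itself is invented for the sketch: an if/else followed by a while loop, so it has two predicate nodes.

```python
def cyclomatic_complexity(edges, num_nodes):
    """V(G) = E - N + 2 for a single connected flow graph."""
    return len(edges) - num_nodes + 2

# Toy flow graph: node 1 entry; node 2 an if/else decision; nodes 3 and 4
# its branches; node 5 a while-loop test; node 6 the loop body; node 7 exit.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 5), (5, 7)]

v_g = cyclomatic_complexity(edges, num_nodes=7)
print(v_g)  # 3

# Cross-check with V(G) = P + 1: the predicate nodes are 2 and 5.
predicate_nodes = 2
print(predicate_nodes + 1)  # 3 -- so a basis set needs 3 independent paths
```

Both formulas agree, which is the point of having three equivalent ways to compute V(G).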
Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of edges and N is the number of nodes of the flow graph G. Alternatively, V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

Deriving Test Cases
- Step 1: Using the design or code as a foundation, draw a corresponding flow graph.
- Step 2: Determine the cyclomatic complexity of the resultant flow graph.
- Step 3: Determine a basis set of linearly independent paths.

Path 1 test case: value(k) = valid input, where k < i (defined below); value(i) = -999, where 2 <= i <= 100. Expected results: correct average based on k values and proper totals.

Equivalence Partitioning

Equivalence partitioning is a black-box testing method: divide the input domain of a program into classes of data and derive test cases based on these partitions. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input domain. An equivalence class represents a set of valid or invalid states for an input condition. An input condition is: a specific numeric value, a range of values, a set of related values, or a Boolean condition.

Equivalence Classes

Equivalence classes can be defined using the following guidelines:
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are defined.

Examples:
- area code: input condition, Boolean - the area code may or may not be present; input condition, range - value defined between 200 and 900.
- password: input condition, Boolean - a password may or may not be present; input condition, value - a six-character string.
- command: input condition, set - containing the commands noted before.

Boundary Value Analysis

Boundary value analysis is a test case design technique that complements equivalence partitioning. Objective: boundary value analysis leads to a selection of test cases that exercise bounding values. Guidelines:
- If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and values just above and just below a and b.
- If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
- Guidelines 1 and 2 are applied to output conditions.
- If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary.

Example: integer D with input condition [-3, 10]; test values: -3, -2, -1, 0, 5, 10, 11.
Example: enumerated data E with input condition {3, 100, 200, 102}; test values: 3, 102.

Such data structures include an array - input conditions: empty, single element, full element, out-of-boundary; search for an element: the element is inside the array or the element is not inside the array. You can think about other data structures: list, set, stack, queue, and tree.
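As an illustration (added, not from the notes), the range guideline can be mechanized for integer inputs: for a range [a, b], test each bound, the value just inside it, and the value just outside it.

```python
def boundary_values(a, b):
    """Boundary-value test inputs for an integer range [a, b]:
    the bounds themselves, just inside them, and just outside them."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# Integer D with input condition [-3, 10]:
print(boundary_values(-3, 10))  # [-4, -3, -2, 9, 10, 11]
```

Here -4 and 11 are the out-of-range probes that the program should reject; the remaining values should be accepted.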
https://www.scribd.com/doc/37245457/6726780-Testing-Fundamentals-2
CC-MAIN-2015-48
refinedweb
12,148
51.65
The Samba-Bugzilla – Bug 1631 SuSE 9.1 compile with -O2 generates buggy code Last modified: 2005-08-24 10:20:52 UTC

This is not your fault, I expect it is a C compiler problem, but it might be worthwhile knowing about. I just upgraded to SuSE Linux Professional 9.1 and re-built 3.0.6rc2 with gcc 3.3.3. Our RPM build script was compiling SAMBA with -O2, and I noticed that the built rpcclient did not work (whilst the version built from my sandbox, i.e. with debugging symbols, worked). Although I haven't gone so far as to look at the assembly, it turns out that compiling SAMBA with -O2 on SuSE 9.1 generates buggy code in the mdfour() routine. Compiling with -O or without optimization works fine. I noticed this by stepping through rpcclient and looking at the resultant OWF generated by mdfour() -- only when compiled without -O2 did the output match the OWF in the directory (i.e. MD4 of the UCS2-LE password). System/gcc specs follow:

lukeh@off/power-station[75]% uname -a
Linux power-station 2.6.5-7.104-default #1 Wed Jul 28 16:42:13 UTC 2004 i686 i686 i386 GNU/Linux
lukeh@off/power-station[76]%

has been reproduced by others on the mailing list.

why is this our bug?

We've seen this already with 9.0. For the builds of current Samba packages for this version I've added -O to the CFLAGS to get a proper smbpasswd binary. While discussing this issue the last time with one of our compiler developers, he argued that this is more likely a Samba and not a gcc problem. I've informed him about this bug.

Lars, if you can let us know what to change to work around this, we will make the change in our code.

lars, i have run into the same problem. using suse 9.1 on a dual-p4 with ht enabled with your samba-3.0.7 rpms. when trying to change a password from a win2k client:

[2004/10/19 21:50:16, 0] libsmb/smbencrypt.c:decode_pw_buffer(539) decode_pw_buffer: incorrect password length (-727309796).
[2004/10/19 21:50:16, 0] libsmb/smbencrypt.c:decode_pw_buffer(540) decode_pw_buffer: check that 'encrypt passwords = yes'
[2004/10/19 21:51:00, 1] smbd/service.c:make_connection_snum(648)

here's what happens when using smbpasswd on the command line:

nettings@pol-serv1:~> smbpasswd
Old SMB password:
New SMB password:
Retype new SMB password:
read_socket_with_timeout: timeout read. read error = Connection reset by peer.
machine 127.0.0.1 rejected the tconX on the IPC$ share. Error was : Read error: Connection reset by peer.
Failed to change password for nettings

the log says:

[2004/10/19 22:05:39, 0] lib/access.c:check_access(328) Denied connection from (127.0.0.1)
[2004/10/19 22:05:39, 1] smbd/process.c:process_smb(1085) Connection denied from 127.0.0.1

is there a quick workaround?

The Samba 3 RPMs from Samba.org are built with -O and should not have this problem. Are you using the following package:

Name : samba Relocations: (not relocatable)
Version : 3.0.7 Vendor: SuSE Linux AG, Nuernberg, Germany
Release : 1.1 Build Date: Do 16 Sep 2004 12:01:27 CEST
Install date: (not installed) Build Host: prokofjieff.suse.de

Jörn: Is your smbd listening to localhost?

*** Bug 1720 has been marked as a duplicate of this bug. ***

(In reply to comment #6) yes. (In reply to comment #7) > Jörn: Is your smbd listening to localhost? yes again.

Just for the sake of completeness: As I mentioned on the mailing list, with CFLAGS="-O2" and SuSE's GCC 3.3.1 our clients can't join the Samba PDC domain successfully. The smbd ends up with:

[2004/06/03 19:29:58, 0] libsmb/smbencrypt.c:decode_pw_buffer(521) decode_pw_buffer: incorrect password length (-1061250291).
[2004/06/03 19:29:58, 0] libsmb/smbencrypt.c:decode_pw_buffer(522) decode_pw_buffer: check that 'encrypt passwords = yes'

and "incorrect user name" is displayed by the client.

Created attachment 769 [details] patch to compile MD4 with O2

works for me, don't ask me why now hash results are correct. DB

thanks for the patch.
I've informed one of our compiler guys and hope to get an explanation why your changes fix it. I'll also test your patch in our build system and keep you up to date.

Comment from a compiler guy: No. The changed code doesn't "fix" any obvious problems which would lead to miscompilations (if it's still equivalent, which I can't determine from just the patch). So if it works now, then only by accident, because of changed order of the code. That could very well hide the bug. # end comment

So we have to investigate further.

Lars, could you please forward this simple piece of test code I developed from the md4 routine to your compiler guy? Code seems ok, but output with SuSE 3.3.1 and 3.3.3 (i386/SuSE 8.2/9.1) differs between -O and -O2!!! Some people with non-SuSE GCCs confirmed to me that it compiles to the same result with or without -O2; I only have SuSE gcc for testing :-) So maybe a SuSE issue with one of your vendor patches?

#include "stdio.h"

unsigned int A, B, C;
unsigned int i;
unsigned int Q[1];

unsigned int H(unsigned int X, unsigned int Y)
{
    return X * Y;
}

int main()
{
    A = B = C = 100000;
    A += H(B, C);
    C += H(A, B);
    A += H(B, C);
    C += H(A, B);
    for (i = 0; i < 1; i++)
        Q[i] = 0;
    printf("%u\n", A);
    return 0;
}

Michael Matz answered: This is a very nice example. The problem is that the second call to "H(B,C)" is removed. The SuSE GCC correctly determines that H is a const function (i.e. doesn't access global memory, and only depends on its arguments), which the mainline GCC 3.3.x is not able to. But then it somehow thinks that C is not changed in between the two calls, so (because H is const) this second call to H(B,C) is not needed, because it would result in the same value, which already was computed, so it's optimized away. Of course C _does_ change between the two calls, and hence such argumentation can't be applied. This does not happen on the 9.2 3.3.4 based compiler, but I need to investigate if we really fixed it by chance, or if the bug is just hidden.
This is fixed in all SuSE Linux post-9.1 products. The bug is indeed not only hidden, but fixed for good in the SuSE Linux 9.2 gcc. Therefore I'm now using -O for pre-9.2 products and the default -O2 for all post-9.1 products in our spec file.

originally against 3.0.6rc2

sorry for the spam, cleaning up the database to prevent unnecessary reopens of bugs.
https://bugzilla.samba.org/show_bug.cgi?id=1631
I want my program to be able to read an array of integer type and count the frequency of each integer in the array. For example, if my array consists of 10 components = {1,1,2,3,1,2,1,1,5,4}, I need my program to be able to say that there are five 1's, two 2's, one 3, one 4, and one 5. I could only go as far as the first occurring integer; something is wrong with my second outer for loop, I think. Here's my code so far:

import java.util.*;

public class Yellow {
    static Scanner console = new Scanner(System.in);

    public static void main(String[] args) {
        int numOfElem;
        System.out.println("Enter how many integers you want to input into the array: ");
        numOfElem = console.nextInt();
        System.out.println();
        int[] list = new int[numOfElem];
        // int i;
        int freq = 1;
        System.out.println("Enter " + numOfElem + " integers: ");
        for (int i = 0; i < numOfElem; i++)
            list[i] = console.nextInt();
        System.out.println();
        for (int i = 0; i < numOfElem; i++) {
            System.out.print(list[i] + " ");
            int searchItem = list[i];
            for (i = i + 1; i < numOfElem; i++) {
                if (list[i] == searchItem)
                    freq++;
                else
                    freq = freq;
            }
            System.out.print(" " + freq);
            System.out.println();
        }
        System.out.println();
    }
}

Thanks for any help.
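For what it's worth, the immediate bug is that the inner loop reuses the outer loop's counter i, so the outer loop skips elements and freq is never reset between items. One possible fix (a sketch, with a hypothetical class name) is to count occurrences into a map instead of juggling indices:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FrequencyCount {
    // Count how many times each value occurs, preserving first-seen order.
    static Map<Integer, Integer> frequencies(int[] list) {
        Map<Integer, Integer> freq = new LinkedHashMap<>();
        for (int value : list) {
            freq.merge(value, 1, Integer::sum); // add 1, starting from 0
        }
        return freq;
    }

    public static void main(String[] args) {
        int[] list = {1, 1, 2, 3, 1, 2, 1, 1, 5, 4};
        frequencies(list).forEach((value, count) ->
            System.out.println(count + " occurrence(s) of " + value));
    }
}
```

For the example array this reports five 1's, two 2's, and one each of 3, 5, and 4; each distinct value is printed exactly once.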
http://www.javaprogrammingforums.com/member-introductions/7747-hello-im-new-i-need-help.html
file

public class Bean {
    private Service service;

    public void someMethod() {
        // notice I didn't instantiate the service class anywhere
        service.executeTask();
    }
}

// XML file (here there is information on how to instantiate the service, but the container decides when to instantiate it)
<bean id="service" class="com.ee.Service" />
<bean id="bean" class="com.ee.Bean">
    <property name="service" ref="service" />
</bean>

// an interface that defines methods for connecting to a database, and other DB-related stuff
public interface IDao {
}

// an implementation of that interface, specific to Oracle databases
public class OracleDao implements IDao {
}

// an implementation of that interface, specific to MySql databases
public class MySqlDao implements IDao {
}

// business class that needs the dao's methods
public class BusinessObject {
    private IDao dao;
}

// XML file
<bean id="oracleDao" class="com.ee.OracleDao" />
<bean id="mySqlDao" class="com.ee.MySqlDao" />
<bean id="businessObject" class="com.ee.BusinessObject">
    <!-- here you choose which implementation to use. it is currently using Oracle, but if you want to change databases later, you just replace ref="oracleDao" with ref="mySqlDao", and there is no need to recompile the code -->
    <property name="dao" ref="oracleDao" />
</bean>

Could you please tell me a little more on this line? >> The basic concept of the Inversion of Control pattern (also known as dependency injection) is that you do not create your objects but describe how they should be created. I don't really get "you do not create your objects but describe how they should be created". The attached code above is kind of an example.

Your example definitely cleared my mind a bit more... but why do we want to do this? What harm can one encounter if we instantiate the Service right there?
Please correct me if I am wrong... Let's say I made a class abc. Now there is another class xyz that needs a reference to an object of class abc.

So instead of doing abc newObject = new abc(); in the xyz class... we define this in the XML file, and the Spring framework automatically creates that object when required by the xyz class?

I am sorry, asking too much stuff... For example:

This was a very good example for a beginner like me :-) So this is what the whole of dependency injection is. Thank you...
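To make the idea concrete, here is a plain-Java sketch (hypothetical stand-in classes, no Spring) of what the container conceptually does with that XML: construct the dependency, construct the dependent object, and inject the former through a setter.

```java
// Hypothetical stand-ins for the classes discussed in the thread.
interface IDao {
    String fetch();
}

class OracleDao implements IDao {
    public String fetch() { return "row from Oracle"; }
}

class BusinessObject {
    private IDao dao;

    // The container calls a setter like this for
    // <property name="dao" ref="oracleDao" />.
    public void setDao(IDao dao) { this.dao = dao; }

    public String doWork() { return dao.fetch(); }
}

public class ManualWiring {
    public static void main(String[] args) {
        // Conceptually what the container does from the XML:
        IDao dao = new OracleDao();               // <bean id="oracleDao" ... />
        BusinessObject bo = new BusinessObject(); // <bean id="businessObject" ... />
        bo.setDao(dao);                           // <property name="dao" ref="oracleDao" />

        System.out.println(bo.doWork());
    }
}
```

Swapping OracleDao for MySqlDao in this hand-written wiring would mean editing and recompiling this code; with the container, only the XML changes, which is the point of describing the wiring instead of hard-coding it.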
https://www.experts-exchange.com/questions/24359100/Inversion-of-Control-and-Dependency-injection.html
In this chapter, we explored the meat of C++ -- object-oriented programming! This is the most important chapter in the tutorial series.

Just a point of curiosity in regard to Quiz 2 and the destructor for the older-than-C++11 version. I thought that you'd have to create a for-loop to iterate through the char array and wipe the array that way. Is this unnecessary because we are using a destructor and the object will be deleted when the destructor is done doing its thing? Kind of like how it's not necessary to return m_data to nullptr?

When you release (delete) memory back to the operating system, you're under no obligation to clear its contents first. If you look at the constructor, the only thing different about the C++14, C++11, and older-than-C++11 versions is how the initialization of the value is done. The allocation of memory is the same. Therefore, it makes sense that in the destructor, the deallocation of memory can also be done the same way in each case.

Ah ha! Makes sense. Thanks for clearing that up for me.
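For context, a minimal sketch of the constructor/destructor pair under discussion (illustrative names, not the quiz's exact solution): memory allocated with new[] in the constructor is simply delete[]'d in the destructor, without wiping the buffer or nulling the pointer.

```cpp
#include <cstring>

class MyString
{
private:
    char* m_data{ nullptr };
    int m_length{ 0 };

public:
    MyString(const char* source)
        : m_length{ static_cast<int>(std::strlen(source)) + 1 }
    {
        m_data = new char[m_length];
        for (int i = 0; i < m_length; ++i)
            m_data[i] = source[i];
    }

    ~MyString()
    {
        // Releasing the memory is enough: there's no obligation to clear
        // its contents first, and since the object is being destroyed,
        // there's also no need to set m_data back to nullptr afterwards.
        delete[] m_data;
    }

    const char* get() const { return m_data; }
};
```

(A production-quality class would also need a copy constructor and copy assignment operator; those are omitted here for brevity.)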
I’m not sure what you mean. All you’ve done is remove the temporary variable. There are no anonymous objects here, and I’m not sure why we’d need one. My ultimate suggestion was that it is good to prevent the creation of a temporary variable ! As you also say, we shouldn’t create unnecessary variables 🙂 ! > My ultimate suggestion was that it is good to prevent the creation of a temporary variable ! It’s best to do whatever is clearest for code comprehension. Temporary variables can help document what your code is doing, making it easier to understand. They also make your code easier to debug since you can see intermediate values. The compiler will probably optimize temporary variables away anyway. I probably wouldn’t create a variable called “sum” just to add two integers together (doesn’t really add value) -- but I would create a temporary variable to hold the result of a function call before I do something else with it. Ok Man ! Interesting, I did not know you could have multiple public or private sections in a class. And in problem 3d, what is the point of the empty set of curly brackets? The Monster constructor does everything it needs to do through the member initializer list, so the body of the constructor is empty. This is common for constructors that just do pure initialization. Hello Alex, Regarding: "It is good programming style to put your class *declarations* in a header file of the same name as the class, and define your class functions in…" Consider: "It is good programming style to put your class *definitions* in a header file of the same name as the class, and define your class functions in…" Updated. Thanks for your help in getting these corrected. Hello Alex, on 3f-3h quiz wouldn’t it be better to include srand() and rand() inside the function since they need to be used every time anyways? is there a reason why this should not be done or is it just preference? 
srand() should only be called once per program; otherwise the results will not be suitably random. If you put it inside the function, it will get called every time. Discarding a random result with each call could, in theory, result in a skewed random distribution… but more likely, it will just result in wasted performance. So, no, you shouldn't do this.

In Quiz 2: in a normal function, the *m_data pointer should be set to a null pointer once deleted -- but inside a class, do you need to do so as well, or not, since it's a private pointer? Thanks for the great work.

Generally speaking, you should set your pointers to nullptr after deleting them. However, the one exception is for deleted pointers inside a destructor. Because the entire class object will be destroyed immediately after the destructor finishes, in most cases you don't need to set your pointers to nullptr, since they'll be destroyed subsequently anyway.

Try this: a class BlackJack for playBlackJack also.

Can we create s_names and s_roars as class static member variables?

You can, but it's generally better to define variables in the smallest scope possible. Since generateMonster() is the only user of those fields, it's better to put them inside generateMonster() rather than at the class level.

Alex, why are getRandomNumber and swapCard in the Deck class static functions?

So you can call them without having to instantiate a Deck object if you want.

I understand that to use a structure/class before it is completely defined (declared), we cannot refer to it directly, but we can use a pointer to it. In solution 1(b), the declaration of distance() takes a parameter of type "class Point2d", i.e. it takes a parameter of class type whose definition is still incomplete. The code compiles and works, but how does it work? Shouldn't it give a compilation error?

Member variables must be complete. However, function parameters are allowed to be incomplete, so long as they have been declared.
Without this, writing a copy constructor or assignment operator would be impossible.

Okay, but even then, the declaration of Point2d is not complete when it is used as a parameter to distance(Point2d)? Does it require an advance declaration like "class Point2d;"?

No, the incomplete class definition serves as an adequate declaration in this case.

Hi Alex! How can I define a variable in a class which is only modifiable by the constructor but not outside the class? The variable must be read-only outside the class.

Make the variable a const member variable and ensure it is private. Making it const will mean it's only initializable by the constructor, but not after that. And making it private will mean only members of the class can access it. If you want outsiders to be able to access it, use an access function.

What I want is that it must be read-only outside the class. Inside the class, any function can change its value. For example, suppose I create my own string class. It contains an integer value Length. That value (Length) cannot be made const, because it changes every time I use the += operator or the = operator -- its length changes every time -- but it cannot be modified outside the class. Yet outside the class, I must have access to it (like a const) so that the following line compiles fine:

It is not possible to have a member that meets all of the following criteria:
* Is directly accessible outside the class (is public)
* Is modifiable within the class
* Is not modifiable outside the class

Hi Alex! I got a question on exercise 3h. My code keeps generating Orcs, but the name, hitpoints, and roar are always random. Here's my code: EDIT: I changed the type generation line from this to this. Now it only generates Skeletons…

Many compilers have a bug where the first random number they generate after calling srand() actually isn't random. You likely can fix your program by calling rand() once (just after the call to srand()) and ignoring the result.
I can't understand this:

This is covered in lesson 6.12a -- For-each loops.

How does "card" access the function printCard()?

In solution 4a, printCard() is changed from a normal function into a member function of the Card class. card is an object of type Card, so printCard() can be called directly on card.

Hello, now that I've learned about classes, I don't see the point of struct. Is there an advantage of using a struct over a class? Also, thanks for the great quiz -- really helpful!

Structs can still be useful when you want to package up several different data types into a single container, but you want to treat them as just a data collection, not as an object with behaviors.

Hello! As I was working on question 3h), why would we refer to MAX_MONSTER_TYPES by doing Monster::MAX_MONSTER_TYPES? That was the only part I kinda had some trouble on. Maybe it was because I just forgot how enums worked.

Enumerators are put in the same scope as the enumeration. In the case where the enum is defined inside a class, the scope of the enum becomes the class, so you need to use the class name as a prefix to access the enumerator. If we'd left MonsterType outside the class, you wouldn't need the Monster:: prefix.

Hey Alex/Watermelon Cat, doing the 4th question, I have encountered a strange error! Looking at Stack Exchange for help, it's said that the compiler is implicitly deleting my constructor. How can I stop it from doing that?

When a std::array is created, it needs access to the default constructor of the element type to initialize the array elements to their default state. Does your Card class have a default constructor? If not, add one.

Oh, thanks. I did not have a default constructor. I didn't think to look at the Card class since it threw an error at the Deck class.

Hey Alex! Maybe I am getting it wrong, but in 1(b), why is it that passing second as an argument (other) allows its private members to be accessed via a dot operator?
return sqrt((m_x - other.m_x)*(m_x - other.m_x) + (m_y - other.m_y)*(m_y - other.m_y));
}

Good question. This is because access specifiers work on a per-class basis, not a per-object basis. Because distanceTo is a member of Point2d, it has access to the private members of ANY Point2d object it can access. In this case, this includes not only the implicit object (this), but also function parameter "other"!

Oh yes! Thanks Alex. By the way, these tutorials are great, helping me a lot. ☺

Hey Alex, do I need to memorize the syntax for the random number generator, or should I rather ask: does every software engineer know all syntax by heart?

No, you don't need to memorize the syntax for random number generation. Just know where to look it up when you need it.

Any chance to get the solutions separated by cpp/h files for each class? I find it hard to see where to put the includes (I always think I'm duplicating them unnecessarily), and I also get a weird Visual Studio Community 2015 error C2280 in the deck exercise:

>deck.cpp(10): error C2280: 'std::array<Card,52>::array(void)': attempting to reference a deleted function

I have Deck.h: and Deck.cpp: If having separate files is the normal way to do things, I think it would be nice if these long quizzes offered the solution in that form. Thanks! 🙂
First of all, thank you for these great tutorials. I always wanted to learn C++ and I am finding your tutorials the perfect balance for my learning speed. I have one question on 1b). In the current posted solution the distanceTo() function is a public member of the Point2d class. It is not declared as friend (not at 1b). Inside that function the code reads other.m_x, where ‘other’ is a Point2d argument, i.e. an object of class Point2d. How can we possibly access the m_x member of object ‘other’, considering that m_x is a private member? This is something that a lot of people miss or mistake -- access controls work on a per-class basis, not a per-object basis. Because distanceTo() is a member function of the Point2d class, it has access not only to the private members of the implicit this object, it also has access to the private members of any other object of type Point2d that it has access to (which in this case is parameter other). I’ve updated the lesson on access controls accordingly. Thanks Alex. It is clear now after reading your answer and your update to 8.3 When I tried to do 1b I first used an access function getX() and getY() because I thought that was the only way to get it working and stay "encapsulated". Now that I understand this "per-class basis" behaviour it makes lots of sense, precisely to avoid the need of these extra access functions. By the way, I think there are some formatting issues with that update. Some of the explanation text is formatted as the code pieces. In quiz #2, shouldn’t the destructor delete the memory allocated by the constructor? Yes. The quiz question is asking you to write the destructor yourself. 
Hi Alex, I tried to answer question 4 first on my own, and ended up with 3 classes and header files with matching source files (Card.cpp, Deck.cpp, Game.cpp) My main.cpp code: #include <iostream> #include "Deck.h" #include "Game.h" int main() { Deck myDeck; for (;;) { Game::prepare(myDeck); if (Game::playBlackjack(myDeck)) std::cout << "You win!n"; else std::cout << "You lose!n"; if (!Game::restartGame()) { break; } } return 0; } All 3 Game functions are static, and "Game:prepare(Deck& deck)" consists of "m_deck.shuffle()" and "srand()". How efficient/inefficient do you think this code is? I ask because I noticed your solution didn’t include any custom header files… (PS- I am not sure how to use the ) Seems fine to me. I didn’t use multiple files just to keep the solution code all in one place, but if I’d been writing my own code, I would have split them like you did. Doesn’t seem inefficient at all, and the structure of your main function is nice and straightforward. I tried doing the blackjack game by having each class in seperate .h and .cpp files but I’m running into issues mainly with my functions not being detected in other classes. Even when I make them a friend, it doesn’t work. For example, I can’t use the printCard function in my Deck class so I ended up making the printCard function static. So I didn’t follow the instructions because of those issues but my game still works. Also, for question 3i, does making it static in this case mean the array is not destroyed when it goes out of scope? I mean do you mean static like how its used in section 4.3 and not the static definition when it comes to classes? For 3i, those variables will get initialized at program startup and destroyed when the program terminates. They do not get destroyed when they go out of scope. So yes, these are exactly the use case that’s talked about in lesson 4.3, not static member variables. 
As for why you can't call your function, you probably need forward declarations for your functions so the other files know they exist. You can either do that manually (by adding forward declarations to the top of the files that need them), or by putting the forward declarations in a header file and including the header (better).

Hi Alex, I have a question on 3h). I tried writing my code, and it was similar to yours, with the exception that I declared my s_roar[6] and s_name[6] outside of my generateMonster() function. I got an error in Visual Studio, error C2864, stating that "a static data member with an in-class initializer must have non-volatile const integral type". After some googling, it seems that volatile values are ones that can be modified by outside sources. Since I made s_name and s_roar static, and there are no functions that change them, how are they considered volatile? The problem was fixed as soon as I moved s_roar and s_name inside the generateMonster() function; then it compiled just fine and I was able to run it. What exactly was the change that made it functional?

The issue here isn't that you defined s_roar and s_name as static members, but that you tried to initialize them directly. Static members of a class can only be _directly_ initialized with integral values or enumerated values. For initialization with other values, you need to do that outside of the class, like so:

This was mentioned in the lesson on static members, but upon reflection it's maybe not obvious enough. I'll put it in its own subsection, so it stands out a little more.

In your solution above, you initialized s_roar and s_name within the generateMonster function, which is still within the MonsterGenerator class. In the static member variables/functions lessons, you initialized static members completely outside the class. Which of these practices is correct?

They're both correct. s_roar and s_name are local static variables.
They're initialized inside the function to keep their scope limited to that function. Local static variables are initialized wherever they are defined. In the lesson on static members, they're initialized outside the class because static members must be initialized outside the class, at global scope. This is necessary because a static member is accessible anywhere (even without an object of the class).

I believe this code would be way simpler to understand than yours. 🙂 Anyway, great job with these tutorials. They are the best I could find on the internet, and I did check a lot!

Easier, yes, but also flawed. Your suggested code produces an uneven distribution, where low numbers may be generated more often than high numbers. If min is 0 and max is 10000, for example, the numbers between 0 and 2767 will be generated 4 times for every 3 times the numbers between 2768 and 9999 are generated.

Hey Alex,.
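On the uneven-distribution point above: for reference, a modulo-free helper in the spirit of what these tutorials use elsewhere (a best-effort sketch, not Alex's exact code). Scaling rand() into the target range keeps the distribution nearly even, instead of favoring the low values the way min + rand() % (max - min + 1) does.

```cpp
#include <cstdlib>

// Returns a random integer in [min, max], distributing rand()'s output
// across the target range by scaling rather than by the biased modulo.
int getRandomNumber(int min, int max)
{
    static const double fraction{ 1.0 / (static_cast<double>(RAND_MAX) + 1.0) };
    // rand() * fraction is in [0, 1), so the cast below is in [0, max - min].
    return min + static_cast<int>((max - min + 1) * (std::rand() * fraction));
}
```

This still inherits rand()'s quality limits; it only removes the modulo bias.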
http://www.learncpp.com/cpp-tutorial/8-15-chapter-8-comprehensive-quiz/comment-page-2/
In the ecosystem of Haskell, a number of stream processing libraries have been made. Their purpose is to process a sequence of values with effects, in a composable manner. Still, I was not satisfied with the feature sets of the existing packages. Accordingly, I decided to make a new one. It's called drinkery. This package achieves the following features:

- Sequential producer: You can build a producer using a monadic action like yield :: s -> Producer s ().
- ListT done right: Correct implementation of a list monad transformer.
- Monadic consumer: A consumer monad processes a stream involving effects.
- Transducer: A mechanism that transforms streams.
- Full duplex: Downstream can send a value to the upstream.
- Decent performance

Hello, world

import Data.Sinky
import qualified Data.Sinky.Finite as D

main = tapListT' (taste "0H1e2l3l4o5,6 w7o8r9ld!\n")
  +& D.filter (not . isDigit)
  $& traverseFrom_ consume (liftIO . putChar)

Sequential producer

Most libraries offer monads where you can emit a value in a computation: Producer for pipes, ConduitM for conduit, PlanT for machines. Sinky employs a Producer which manipulates a Tap, a non-terminating source. The type parameter r is a request type that grants full-duplexity; assume () for now.

newtype Producer r s m a = Producer { unProducer :: (a -> Tap r s m) -> Tap r s m }
newtype Tap r s m = Tap { unTap :: r -> m (s, Tap r s m) }

As in other implementations, yield emits one element.

yield :: (Monoid r, Applicative m) => s -> Producer r (Maybe s) m ()

A Producer can be converted to a Tap:

runProducer :: (Monoid r, Applicative m) => Producer r (Maybe s) m a -> Tap r (Maybe s) m

One big difference from other libraries is the explicit use of Maybe to mark the end of a stream. This allows you to omit Maybe and use a specific value instead (e.g. an empty bytestring).

ListT done right

It's a well-known fact that transformers' ListT is not a monad transformer. A proper implementation is convenient for writing nested loops.
Today, several libraries implement ListT. The essentials are:

- (a) A monad transformer T: T m is a monad for any monad m.
- (b) An Alternative instance: you can convert a list into an action by asum . map pure.
- (c) A way to draw a from T m a.

drinkery implements it as a Boehm-Berarducci encoded list. (a) is satisfied because ListT r m is a monad regardless of m. There is also an Alternative instance (b). A specialised function is defined for convenience:

```haskell
newtype ListT r m s = ListT
  { unListT :: forall x. (s -> Tap r x m -> Tap r x m) -> Tap r x m -> Tap r x m }

sample :: Foldable f => f s -> ListT r m s
```

You can turn it into a Tap:

```haskell
runListT :: (Monoid r, Applicative m) => ListT r m s -> Tap r (Maybe s) m
```

Benchmark time. This benchmark goes through a triple of loops where the inner loops depend on the outer one.

```haskell
sourceAlt :: Monad m => ([Int] -> m Int) -> m Int
sourceAlt k = do
  a <- k [1..50]
  b <- k [1..a]
  c <- k [1..b]
  return $! a + b + c
```

Thanks to the encoding, ListT doesn't impose a slowdown. In fact, it's the fastest implementation!

```
drain/drinkery/Producer  mean 304.7 μs  ( +- 11.48 μs )
drain/drinkery/ListT     mean 304.7 μs  ( +- 24.92 μs )
drain/pipes/Producer     mean 372.9 μs  ( +- 17.91 μs )
drain/pipes/ListT        mean 770.3 μs  ( +- 75.69 μs )
drain/list-t             mean 5.332 ms  ( +- 393.9 μs )
drain/ListT              mean 23.69 ms  ( +- 1.331 ms )
```

Monadic consumer

Monadic consumption is one of the important abilities of a stream processing library (there is an exception like streaming, though). The most common implementation is called an iteratee; machines, pipes, and conduit have some additions on top of it.

```haskell
newtype Iteratee s m a = Iteratee { runIteratee :: m (Either (s -> Iteratee s m a) a) }
```

drinkery's consumer, on the other hand, is dissimilar:

```haskell
newtype Sink t m a = Sink { runSink :: t m -> m (a, t m) }

consume :: (Monoid r, Monad m) => Sink (Tap r s) m s
```

The first type parameter represents the source (usually Tap r s). A Sink action is a function that consumes a tap and returns the remainder.
You can just apply runSink to feed a tap. Several combinators are defined to work with finite streams; their first argument is usually consume.

```haskell
foldlFrom' :: (Monad m) => m (Maybe a) -> (b -> a -> b) -> b -> m b
foldMFrom :: (Monad m) => m (Maybe a) -> (b -> a -> m b) -> b -> m b
traverseFrom_ :: (Monad m) => m (Maybe a) -> (a -> m b) -> m ()
drainFrom :: (Foldable t, Monad m) => m (Maybe a) -> m ()
```

Since it operates on a Tap, you can push an element back:

```haskell
leftover :: (Monoid r, Monad m) => s -> Sink (Tap r s) m ()
```

Fanout

Sometimes we want to distribute an input stream to multiple consumers. This is not possible with Sink itself, and cloning a tap is not trivial either. For this purpose, drinkery offers a classic iteratee:

```haskell
newtype Awaiter s m a = Awaiter { runAwaiter :: m (Either (s -> Awaiter s m a) a) }

await :: Monad m => Awaiter s m s
```

serving_ combines a list of Awaiters into one.

```haskell
serving_ :: Monad m => [Awaiter s m a] -> Awaiter s m ()
```

It can be converted into a Sink:

```haskell
iterAwaiterT consume :: Awaiter s m a -> Sink s m a
```

Taste & compare

Not defined in the package yet, but a Sink can simultaneously consume two streams through the Product type. It should also be possible to manipulate an extensible record of taps.

```haskell
drinkL :: (Monoid r, Monad m) => Sink (Product (Tap r s) tap) m s
drinkL = drinking $ \(Pair p q) -> fmap (`Pair` q) <$> unTap p mempty
```

Multi-stream consumption is a rare feature. Only machines supports it as far as I know, but it probably won't be long till drinkery gains it in a more flexible way.

Transducer

A stream transducer receives an input and produces zero or more outputs. There are three ways to represent a stream transducer:

- A concrete structure which can consume and produce values (e.g. machines, pipes, conduit)
- A stream producer where the base monad is a consumer (iteratee)
- A function from a stream producer to another stream producer (streaming)

drinkery took the second approach. Distiller tap r s m is a tap which consumes tap.
```haskell
type Distiller tap r s m = Tap r s (Sink tap m)
```

Surprisingly, the only combinator introduced is (++$), the composition of a tap and a distiller.

```haskell
(++$) :: (Functor m) => tap m -> Distiller tap r s m -> Tap r s m
```

Since a Distiller is a special case of a tap, you can feed a drinker from a distiller, and you can also connect two distillers using (++$). Note that the drinker also has access to the input of the distiller, allowing it to send a request.

```haskell
runSink :: Sink (Tap r s) (Sink tap m) a -> Distiller tap m r s -> Sink tap m a
```

Full-duplexity

One distinctive feature of pipes is Proxy, the base type, having four parameters:

```haskell
data Proxy a' a b' b m r
  = Request a' (a  -> Proxy a' a b' b m r)
  | Respond b  (b' -> Proxy a' a b' b m r)
  | M (m (Proxy a' a b' b m r))
  | Pure r
```

This allows a producer to receive a value of type b', and a consumer to send a'. This interactivity is useful for handling seeking. However, pipes' composition operator fixes the request type to ():

```haskell
(>->) :: Monad m
  => Proxy a' a () b m r
  -> Proxy () b c' c m r
  -> Proxy a' a c' c m r
```

You need to resort to one of the special combinators, such as (+>>). A sad fact is that Proxy cannot accumulate requests for one input; you would have to define some custom function.

```haskell
(+>>) :: Monad m
  => (b' -> Proxy a' a b' b m r)
  -> Proxy b' b c' c m r
  -> Proxy a' a c' c m r
```

The good old iteratee's composition does propagate requests, but in a rather disappointing fashion: the type of requests is SomeException. Honestly, iteratee's combinators and their semantics are quite puzzling.

```haskell
newtype Iteratee s m a = Iteratee
  { runIter :: forall r. (a -> Stream s -> m r)
            -> ((Stream s -> Iteratee s m a) -> Maybe SomeException -> m r)
            -> m r }
```

Other libraries don't support bidirectionality at all. As we've seen in the first section, drinkery's Tap (Producer and ListT likewise) has an extra parameter for reception. You can send requests by calling request.
```haskell
request :: (Monoid r, Monad m) => r -> Sink (Tap r s) m ()
```

A Producer or ListT is able to receive orders from the drinker, flushing the pending requests.

```haskell
accept :: Monoid r => Producer r s m r
inquire :: Monoid r => ListT r m r
```

Of course, composition doesn't take this ability away.

Resource management

conduit has a unique mechanism for resource management which makes it a respectably practical library; you can attach a finaliser to a stream producer.

```haskell
addCleanup :: Monad m => (Bool -> m ()) -> ConduitM i o m r -> ConduitM i o m r
```

In drinkery, you can create an instance of CloseRequest to finalise a tap. (+&), a specialised version of runSink, closes the tap as soon as the drinker finishes.

```haskell
class CloseRequest a where
  -- | A value representing a close request
  closeRequest :: a

instance CloseRequest r => Closable (Tap r s)

(+&) :: (Closable tap, Monad m) => tap m -> Sink tap m a -> m a
```

Performance

I benchmarked a composition of two scanning operations, scan (+) 0 D.++$ scan (+) 0, processing 22100 elements.

```
scan-chain/drinkery/++$  mean 1.717 ms  ( +- 104.5 μs )
scan-chain/drinkery/$&   mean 1.239 ms  ( +- 110.7 μs )
scan-chain/pipes         mean 1.210 ms  ( +- 78.40 μs )
scan-chain/conduit       mean 1.911 ms  ( +- 97.84 μs )
scan-chain/machines      mean 2.731 ms  ( +- 176.9 μs )
```

It's quite good. Note that there are two ways to compose a distiller: ++$ attaches it to a tap or a distiller, and $& attaches it to a drinker. The latter seems to be faster. Notably, a single scan is significantly faster than the rivals:

```
scan/drinkery  mean 534.6 μs  ( +- 58.07 μs )
scan/pipes     mean 736.7 μs  ( +- 54.84 μs )
scan/conduit   mean 862.3 μs  ( +- 68.02 μs )
scan/machines  mean 1.352 ms  ( +- 84.82 μs )
```

Conclusion

drinkery offers significantly greater flexibility without losing speed. The API is not complete; I plan to add a lot more combinators in the near future. I'm looking forward to your patronage.
https://www.schoolofhaskell.com/user/fumieval/drinkery-the-boozy-streaming-library
In general, process namespaces are useful for:

1. silly marketing (see Sun and FreeBSD)
2. the very obscure case of "root" account providers who are too clueless to use SE Linux or Xen

I don't think either case justifies the complexity. I am not looking forward to the demands that I support this mess in procps. I suspect I am not alone; soon people will be asking for support in pstools, gdb, fuser, killall... until every app which interacts with other processes will need hacks.

If the cost were only an #ifdef in the kernel, there would be no problem. Unfortunately, this is quite a hack in the kernel and it has far-reaching consequences in user space.
http://lkml.iu.edu/hypermail/linux/kernel/0608.1/3015.html
Do you already have the old XML format, and is only a transformation to the new format required? If yes, I would try to do this with XSLT.

If you are creating the new XML as part of some program, where use of Xerces-J is mandatory, then we must find a way to solve this problem with Xerces-J and an API like DOM (which it looks like you are using). I think it would be good if you can please share some of the core logic you have written using Xerces-J. I think that would give us some insight about what could be wrong with the logic you have written.

On Thu, Aug 20, 2009 at 2:27 PM, juho<j.houllier@gmail.com> wrote:
>
> Hello,
>
> I want to add namespaces declaration on parent if child and parent have
> different namespaces.
> My wish is to have output like that:
>
> <balise1 xmlns:
> <tec:balise2 />
> <tec:balise3 />
> </balise>
>
> instead of what i have actually
> <balise1>
> <tec:balise2 xmlns:
> <tec:balise3 xmlns:
> </balise1>

--
Regards,
Mukul Gandhi
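As a side note beyond the thread: if Xerces-J is not a hard requirement, the behavior juho asks for — declaring a child's namespace once on the parent instead of repeating it on every child — is what some serializers do by default. Here is a minimal Python sketch using the standard library's ElementTree, whose serializer hoists every namespace used in the tree onto the outermost element being written. The `tec` namespace URI below is made up for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace URI standing in for the poster's "tec" namespace.
TEC = "http://example.com/tec"
ET.register_namespace("tec", TEC)

# Build a parent in no namespace with two children in the tec namespace.
root = ET.Element("balise1")
ET.SubElement(root, f"{{{TEC}}}balise2")
ET.SubElement(root, f"{{{TEC}}}balise3")

xml = ET.tostring(root, encoding="unicode")
print(xml)
# The xmlns:tec declaration appears once, on the parent element,
# and the children are serialized with the tec: prefix only.
```

The key point is that ElementTree collects all namespaces used anywhere in the tree before serializing, so the declaration lands on the root rather than on each child.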
http://mail-archives.apache.org/mod_mbox/xerces-j-users/200908.mbox/%3C7870f82e0908200409u242038acq262b95b99d1ed224@mail.gmail.com%3E
Summary: Learn how to use Microsoft Visual C# and Visual Studio 2005 to develop form regions for Outlook 2007. The sample Visual C# Outlook COM add-in and form region show you how to create Internet headers as an adjoining region. (22 printed pages)

Ryan Gregg, Microsoft Corporation

June 2006

Applies to: Microsoft Office Outlook 2007, Microsoft Visual Studio 2005

Download Outlook 2007 Add-In: Form Region Add-In
Outlook 2007 Sample: Visual Studio 2005 Templates

Contents
- Overview of Outlook 2007 Form Regions
- Form Regions Using a Custom Template in Visual Studio 2005
- Designing a Form Region
- Hooking Up an Add-in to a Form Region
- Creating a Setup Project
- Conclusion
- Additional Resources

Microsoft Office Outlook 2007 introduces a new form technology called form regions. Form regions overcome the limitations of customizing Outlook forms by using form pages in a variety of new ways. For example, form regions can be displayed in the Reading Pane, and can be added to standard forms without creating a derived message class. Although form regions do not support embedded script, business logic for a form region is written as an Outlook COM add-in and linked with the form region. This article covers the creation of a sample Outlook add-in that displays transport header information in a form region on e-mail items. You can create and run forms with form regions without a COM add-in, but using a COM add-in has the benefit of supporting custom business logic or advanced functionality in the form regions.

In previous versions of Outlook, third parties who wanted to extend the Outlook user interface (UI) to include custom functionality were restricted by limited support for custom forms in Outlook. Often, when you wanted to add a few fields or make a small modification to the form, Outlook required you to redesign the entire form. In Outlook 2007, form regions allow more ways to add UI to standard forms and simplify the process of designing custom UI.
Through an adjoining form region, you can extend any existing form with additional fields or controls. These adjoining form regions are displayed at the bottom of the first page of a form, and each adjoining form region is collapsible. You can also add a separate form region, which is displayed as a full additional form page and can appear on any existing standard form or custom form.

In addition to adding new UI, you can use a form region to replace existing UI in standard forms and create a new form for a derived message class, as shown in Figure 1. You can register the form region as a replacement form region: If you specify a value of Replace for the <formRegionType> tag in the form region manifest XML file, the form region will replace the default page of the form. If you specify a value of ReplaceAll for the <formRegionType> tag, the form region will replace the entire form, resulting in a new form for a derived message class.

Another benefit of using form regions is that form regions support visual themes. In the past, you were required to use custom Microsoft ActiveX controls or advanced workarounds to enable a themed look for custom forms. With form regions, all of the controls that come with Outlook 2007 inherit the Microsoft Windows theme.

Microsoft Visual Studio 2005 continues to have built-in support for developing Office add-ins through the Shared add-in template, but this template is missing some key blocks of code that are useful for writing Outlook 2007 COM add-ins in managed code. Although using Microsoft Visual Studio Tools for the Microsoft Office System would be the preferred development approach, it is currently incompatible with Office 2007 applications. In the meantime, you can download the easy-to-use templates that encapsulate the details of the IDTExtensibility2 interface for Outlook add-ins (shown earlier).

1. Download OutlookAddinTemplates.msi.
2. Double-click the file to start the setup wizard.
3. Complete the setup steps.
The template files are automatically installed to the proper locations. Now that the custom template is installed, proceed by creating an add-in project using the template.

1. In Visual Studio 2005, press CTRL+SHIFT+N to display the New Project dialog box, as shown in Figure 2.
2. In the Project Types list, click your preferred language (Visual Basic or Visual C#).
3. In the Templates list, select Office Outlook 2007 Add-in from the My Templates group.
4. Name the project InternetHeaderAddin and click OK.

Visual Studio 2005 then uses the custom template to generate a default add-in project for Outlook 2007. This project contains several files that form the basis of an Outlook add-in.

Because you know this add-in will contain a form region, create a folder to store the form region files in your project. To create a new folder in the project, right-click the project in Solution Explorer, point to Add, and then click New Folder. Name the folder Regions.

Now that you have created an add-in project and created a location in which to store your form region, you can switch to Outlook and start designing the form region layout.

Before you start writing the business logic behind a form region, you should design the layout of the form and define all of the controls. You will use Outlook to design your new form region layout.

1. Start Outlook 2007.
2. On the main menu, point to Tools, click Forms, and then click Design a Form.
3. Select Message, and then click Open, as shown in Figure 3.
4. In the Design group, click Form Region, and then click New Form Region, as shown in Figure 4.

Outlook creates a tab in the Forms Designer titled "(Form Region)." This tab is now a new form region design surface that you will use to design the form region, saving it as an Outlook Form Storage (.ofs) file.

For this solution, you need one text box and one button on the form. Display the Control Toolbox by going to the Design group of the Ribbon UI and clicking Control Toolbox.
1. Right-click the Toolbox window, and select Custom Controls.
2. Scroll through the list of controls and select Microsoft Office Outlook Command Button Control and Microsoft Office Outlook TextBox Control, and then click OK, as shown in Figure 5.
3. Drag a text box and a command button from the Toolbox to the form design surface. Arrange the controls to look like the example in Figure 6.
4. To adjust the properties of each control, including the control name and caption, right-click each control and select Properties.

For each control, keep the default settings and adjust the properties accordingly:

Text box control:
- Name: TextBoxHeaders
- Multiline: True
- Read-only: True
- Horizontal Layout: Grow/shrink with Form

Command button control:
- Name: BtnCopy
- Caption: &Copy

To save the form region, go to the Design group of the Developer tab, click Form Region, and then click Save Form Region As. Browse to the location of your add-in project and open the Regions folder that you created earlier. Save the new form region in this folder as InternetHeaderAdjRegion.ofs, and close the window. When Outlook prompts you to save the changes to the item underlying the designer, click No.

Now that the design of the form region is complete, you can include the .ofs file in your add-in project and author the XML manifest for the form region. The XML manifest includes information about the form region and how it is displayed to the user.

1. In the Visual Studio 2005 add-in project created earlier, right-click the Regions folder, point to Add, and then click Existing Item.
2. Change the Files of Type selector to indicate All Files.
3. Select the InternetHeaderAdjRegion.ofs file, and click Add.

Next, you need to author the XML manifest, which indicates to Outlook the details about this form region.

1. Right-click the Regions folder, point to Add, and click New Item.
2. Select XML File, type InternetHeaderAdjRegion.xml for the file name, and then click Add.
Create the manifest file according to the Form Region XML schema, for example:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<FormRegion xmlns="">
  <name>InternetHeaderDisplay</name>
  <formRegionType>adjoining</formRegionType>
  <title>Internet Headers</title>
  <showInspectorRead>true</showInspectorRead>
  <showReadingPane>true</showReadingPane>
  <showInspectorCompose>false</showInspectorCompose>
  <addin>InternetHeaderAddin.Connect</addin>
</FormRegion>
```

For more information about the XML schemas for form regions, see the sections "Form Region Manifest" and "Form Region Localization Manifest" in the 2007 Office System: XML Schema Reference.

Each element in the XML file defines an aspect of how Outlook displays the form region:

- <name> defines the internal name used by the add-in to refer to this form region.
- <title> defines the display text used in the adjoining region header.
- <formRegionType> defines the type of region, in this case an adjoining region.
- <showInspectorRead> defines whether this region will be shown for read inspectors.
- <showReadingPane> defines whether this region will be displayed in the Reading Pane.
- <showInspectorCompose> defines whether this region will be displayed for compose inspectors.
- <addin> indicates the ProgID of the add-in that this form region is linked with.

To properly register a form region with Outlook, a registry value needs to be added for each form region. This key can go in either HKEY_LOCAL_MACHINE or HKEY_CURRENT_USER, based on the solution requirements. For this sample, add a registry key to HKEY_CURRENT_USER.

1. Open regedit.exe by clicking Start, and then clicking Run. Type regedit.exe, and then click OK.
2. Browse to HKEY_CURRENT_USER\Software\Microsoft\Office\Outlook.
3. If the FormRegions key does not exist, create a new key in this path.
4. Expand the FormRegions key, and check for a key named IPM.Note. If this key does not exist, create it. You might notice other keys, such as IPM.Note.Microsoft.Exchange.Voice.
These keys are part of a built-in solution for Exchange Unified Messaging and should not be modified. Add a new string value under the IPM.Note key, named InternetHeaderAdjRegion. Double-click the value name, type the full path of the manifest XML file, such as C:\Source\InternetHeaderAdjRegion.xml, and then click OK. Ensure that the path is a local path; Outlook ignores UNC-based paths for form regions for performance reasons.

Now that you have completed the form region design and created a manifest describing the form region, you are ready to hook up your add-in to work with this form region.

When a form region manifest includes the <addin> element, Outlook looks for a COM add-in with the same ProgID and checks to see if the same class that implements the IDTExtensibility2 interface also implements the FormRegionStartup interface. If this interface is implemented, Outlook calls on that interface to obtain the layout information and to provide a reference to the form region when it is displayed.

Because form regions are based on the Microsoft Forms 2.0 types, you must add a reference in the add-in project to this type library.

1. In Solution Explorer, right-click the project node, and then click Add Reference.
2. In the Add Reference dialog box, click the COM tab, as shown in Figure 7.
3. Scroll to find Microsoft Forms 2.0 Object Library, select it, and click OK.

Visual Studio adds a reference to Microsoft.Vbe.Interop.Forms in the project references.

Next, add a reference to System.Windows.Forms so that you can access the Clipboard for the Copy button on the form.

1. In Solution Explorer, right-click the project node and click Add Reference.
2. In the Add Reference dialog box, click the .NET tab.
3. Scroll to System.Windows.Forms and click OK.

Visual Studio will add a reference to System.Windows.Forms to the project.

To track multiple instances of a form region, you need a class that holds on to the state information for each form region.
The add-in creates a new instance of this class each time a form region is opened, and destroys each instance when the form region is closed.

In the add-in project, add a new class named InternetHeaderAdjRegion to your project:

1. In Solution Explorer, right-click the project node.
2. Click Add, and then click Class.
3. Type the name of the new class, InternetHeaderAdjRegion, and then click Add.

Inside the new class, you will define five variables to maintain state, and one event that will be used to communicate back to the parent class that the form region has been closed and the references should be cleaned up. You will also add a constructor that takes a FormRegion object, event handlers for the Region.Close event and BtnCopy.Click event, and two helper functions, GetTransportHeaders and ShowRegionOnItem, that will help retrieve the information you need to display and determine if the form region should be shown on an item.

To create aliases for the namespace references that will be used in the project, include the following code block at the top of the file.

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using Outlook = Microsoft.Office.Interop.Outlook;
using Forms = Microsoft.Vbe.Interop.Forms;
```

```vb
Imports Outlook = Microsoft.Office.Interop.Outlook
Imports Forms = Microsoft.Vbe.Interop.Forms
```

Inside the class, define the instance variables and the event as follows:

```csharp
class InternetHeaderAdjRegion
{
    // Instance variables to maintain state.
    private Outlook.FormRegion Region;
    private Forms.UserForm Form;
    private Outlook.OlkTextBox TextBoxHeaders;
    private Outlook.OlkCommandButton BtnCopy;
    private string Headers;

    // Public event for when the region closes.
    public event EventHandler Closed;
}
```

```vb
Public Class InternetHeaderAdjRegion
    ' Instance variables for state.
    Private WithEvents Region As Outlook.FormRegion
    Private Form As Forms.UserForm
    Private TextBoxHeaders As Outlook.OlkTextBox
    Private WithEvents BtnCopy As Outlook.OlkCommandButton
    Private Headers As String

    ' Public event for when the region closes.
    Public Event Closed As EventHandler
End Class
```

Next, you need to hook up references for all of the objects we care about. To access the controls on the form, define a new variable for the Forms 2.0 UserForm object. This is returned when you get the value of the Form property on the FormRegion object. From this, the add-in obtains references to each individual control on the form by accessing the Controls collection on the Form object. Then, hook up event handlers to the Region.Close event and the BtnCopy.Click event. Set the Text property of the text box control to the value of the transport headers, so that they are displayed to the user. All of this work happens in the constructor for this class. You define the GetTransportHeaders method later on.

```csharp
public InternetHeaderAdjRegion(Outlook.FormRegion region)
{
    Region = region;
    Region.Close +=
        new Outlook.FormRegionEvents_CloseEventHandler(Region_Close);

    Form = (Forms.UserForm)region.Form;
    Headers = GetTransportHeaders(Region.Item);

    TextBoxHeaders = (Outlook.OlkTextBox)Form.Controls.Item("TextBoxHeaders");
    TextBoxHeaders.Text = Headers;

    BtnCopy = (Outlook.OlkCommandButton)Form.Controls.Item("BtnCopy");
    BtnCopy.Click +=
        new Outlook.OlkCommandButtonEvents_ClickEventHandler(BtnCopy_Click);
}
```

```vb
Public Sub New(ByVal Region As Outlook.FormRegion)
    Me.Region = Region
    Form = DirectCast(Region.Form, Forms.UserForm)

    TextBoxHeaders = DirectCast( _
        Form.Controls.Item("TextBoxHeaders"), Outlook.OlkTextBox)
    BtnCopy = DirectCast( _
        Form.Controls.Item("BtnCopy"), Outlook.OlkCommandButton)

    Headers = GetTransportHeaders(Region.Item)
    TextBoxHeaders.Text = Headers
End Sub
```

Next, add the code behind the event handlers defined in the constructor.
For the BtnCopy_Click event handler, copy the current headers to the Clipboard. Because the .NET Framework provides System.Windows.Forms.Clipboard, use this class to copy the headers. The SetText method sets the Clipboard value to a given string. A try/catch block is used to make sure that no errors are thrown back to Outlook if this method fails.

```csharp
void BtnCopy_Click()
{
    try
    {
        System.Windows.Forms.Clipboard.SetText(Headers);
    }
    catch { }
}
```

```vb
Private Sub BtnCopy_Click() Handles BtnCopy.Click
    Try
        System.Windows.Forms.Clipboard.SetText(Headers)
    Catch
    End Try
End Sub
```

The second event handler to hook up is for Region_Close. Outlook raises this event when the form region is closed, either because the Inspector window closed or because the Reading Pane is no longer displaying the item. In this event handler, you want to clean up all the state references, unhook the event handlers, and raise the Closed event to notify the parent about the change.

```csharp
void Region_Close()
{
    // Unhook events.
    BtnCopy.Click -=
        new Outlook.OlkCommandButtonEvents_ClickEventHandler(BtnCopy_Click);

    // Remove references.
    BtnCopy = null;
    TextBoxHeaders = null;
    Headers = null;
    Form = null;
    Region = null;

    // Fire event to the parent.
    if (Closed != null)
        Closed(this, EventArgs.Empty);
}
```

```vb
Private Sub Region_Close() Handles Region.Close
    ' Remove our references.
    BtnCopy = Nothing
    TextBoxHeaders = Nothing
    Headers = Nothing
    Form = Nothing
    Region = Nothing

    ' Raise the closed event.
    RaiseEvent Closed(Me, EventArgs.Empty)
End Sub
```

You already used GetTransportHeaders in the constructor, so you need to define what this method does. GetTransportHeaders uses the new Outlook PropertyAccessor object to read the PR_TRANSPORT_MESSAGE_HEADERS property from the item. This property is not exposed through the Outlook object model, but the PropertyAccessor object enables you to read it without using another API. Because this helper method does not reference any of the instance variables, it is defined as static.
A try/catch block is again used to catch any errors and to fail gracefully. You also add an internal method that will be called from the Connect class to determine whether or not to show the form region. The code determines this based on whether there are any transport headers defined.

```csharp
static string GetTransportHeaders(object Item)
{
    const string PR_TRANSPORT_MESSAGE_HEADERS = "";
    try
    {
        OutlookItem olkItem = new OutlookItem(Item);
        Outlook.PropertyAccessor propAccessor = olkItem.PropertyAccessor;
        string internetHeaders =
            (string)propAccessor.GetProperty(PR_TRANSPORT_MESSAGE_HEADERS);
        return internetHeaders;
    }
    catch
    {
        return null;
    }
}

internal static bool ShowRegionOnItem(object Item)
{
    string headers = GetTransportHeaders(Item);
    if (headers == null || headers.Length == 0)
        return false;
    else
        return true;
}
```

```vb
Public Shared Function GetTransportHeaders(ByVal Item As Object) As String
    Try
        Dim olkItem As New OutlookItem(Item)
        Dim propAccessor As Outlook.PropertyAccessor = _
            olkItem.PropertyAccessor
        Dim internetHeaders As String = _
            CStr(propAccessor.GetProperty(""))
        Return internetHeaders
    Catch
        Return Nothing
    End Try
End Function

Shared Function ShowRegionOnItem(ByVal Item As Object) As Boolean
    Dim headers As String = GetTransportHeaders(Item)
    Return Not String.IsNullOrEmpty(headers)
End Function
```

For more information about the PropertyAccessor object, see What's New for Developers in Outlook 2007 (Part 2 of 2) and the Outlook 2007 Developer Reference, available on MSDN.

Now that you have a class to maintain the state of a single form region instance, you can use this class and a collection in the Connect class to maintain the state of multiple form regions over the lifetime of the add-in. Now you can start hooking up the FormRegionStartup interface and making the form region solution come alive.

To begin, open the Connect class from the add-in project. This class was created by the Outlook add-in template and is the class that implements the IDTExtensibility2 interface.
To enable this add-in to work with form regions, you need to also implement the FormRegionStartup interface by adding the interface to the class definition.

```csharp
/// <summary>
/// The object that implements the Outlook add-in interface.
/// The additional logic that implements IDTExtensibility2 is contained
/// in Connect.Designer.cs
/// </summary>
public partial class Connect : Outlook.FormRegionStartup
{
    // Leave existing code intact.
    ...

    #region FormRegionStartup Members

    public void BeforeFormRegionShow(Outlook.FormRegion FormRegion)
    {
    }

    public object GetFormRegionStorage(string FormRegionName, object Item,
        int LCID, Outlook.OlFormRegionMode FormRegionMode,
        Outlook.OlFormRegionSize FormRegionSize)
    {
    }

    #endregion
}
```

```vb
''' <summary>
''' The object that implements the Outlook add-in interface.
''' The additional logic that implements IDTExtensibility2 is contained
''' in Connect.Designer.vb
''' </summary>
''' <remarks></remarks>
Partial Public Class Connect
    Implements Outlook.FormRegionStartup

    ' Keep existing code intact.
    ...

#Region "FormRegionStartup members"

    Public Sub BeforeFormRegionShow( _
        ByVal FormRegion As Outlook.FormRegion) _
        Implements Outlook._FormRegionStartup.BeforeFormRegionShow

    End Sub

#End Region

End Class
```

The two methods defined in this interface are called as Outlook loads an item and prepares to show a form. Outlook calls the GetFormRegionStorage method when it starts to load the UI. Later, just before Outlook displays the form region to the user, it calls the BeforeFormRegionShow method, and passes a reference to the form region object.

First write the code for the GetFormRegionStorage method. The best practice here is to embed the .ofs file in your add-in so that it cannot be modified by the user or other programs. One of the formats accepted by Outlook is a byte array, so you can use the .NET Framework resource system to embed the .ofs file in your add-in:

1. Right-click the project in Solution Explorer, and select Properties.
2. Click the Resources tab, and then click the hyperlink to create a default resource file if you have not yet created a resource file for the project.
3. Press CTRL+5 to show file-based resources.
4. Drag InternetHeaderAdjRegion.ofs from Solution Explorer into the resource list view to add it as a resource.
5. Save the project settings and close the project tab.

Now, in the GetFormRegionStorage method in the Connect class, you make a call to ShowRegionOnItem to check whether to show the region. If the code finds that the message contains transport headers, it returns a byte array containing the contents of the .ofs file. If there are no headers, it returns null, which indicates to Outlook not to load the form region for this item.

```csharp
public object GetFormRegionStorage(string FormRegionName, object Item,
    int LCID, Outlook.OlFormRegionMode FormRegionMode,
    Outlook.OlFormRegionSize FormRegionSize)
{
    if (FormRegionName == "InternetHeaderDisplay" &&
        InternetHeaderAdjRegion.ShowRegionOnItem(Item))
    {
        // Return the storage only when there are headers.
        return Properties.Resources.InternetHeaderAdjRegion;
    }
    return null;
}
```

```vb
Public Function GetFormRegionStorage(ByVal FormRegionName As String, _
    ByVal Item As Object, ByVal LCID As Integer, _
    ByVal FormRegionMode As Outlook.OlFormRegionMode, _
    ByVal FormRegionSize As Outlook.OlFormRegionSize) As Object _
    Implements Outlook._FormRegionStartup.GetFormRegionStorage

    If FormRegionName = "InternetHeaderDisplay" AndAlso _
        InternetHeaderAdjRegion.ShowRegionOnItem(Item) Then
        ' Return the storage only when there are headers.
        Return My.Resources.InternetHeaderAdjRegion
    End If
    Return Nothing
End Function
```

Visual Studio 2005 automatically generates the Properties.Resources.InternetHeaderAdjRegion (or My.Resources.InternetHeaderAdjRegion) hierarchy based on the resources defined in the project. This provides a byte array exactly as Outlook expects.

If you were to compile and run the project as it is right now, you would see the form region appear on all IPM.Note items that have transport headers. However, no data would appear in the text box and the button would not have any action. You still need to hook up the state management class to the instance of a form region.
You can do this using the BeforeFormRegionShow method of the FormRegionStartup interface. To track multiple open form regions (if there are multiple Inspectors or Explorer windows), you need to maintain a collection of the form region instances. To do this, you use a list collection, a .NET generic collection class, typed to your state management class, InternetHeaderAdjRegion. First, define the collection as an instance variable in your class. Put this code block at the top of the Connect class:

private List<InternetHeaderAdjRegion> OpenRegions =
    new List<InternetHeaderAdjRegion>();

Private OpenRegions As New List(Of InternetHeaderAdjRegion)

Next, write some code in the BeforeFormRegionShow method to create a new instance of the InternetHeaderAdjRegion, and pass it the form region that is about to be shown. The state management class will then initialize the region and set the Text property of the text box to your transport headers. At the same time, a new event handler is created to handle the Closed event on the state management class. Remember, this event is raised when the form region is no longer being displayed to the user, and you should clean up the reference. In this case, you simply remove the instance of the class from the OpenRegions collection and let garbage collection take care of the rest.
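Before looking at the Outlook-specific code, here is the same register-and-remove-on-close bookkeeping pattern sketched in plain Java. Every name in this sketch is invented for illustration and is not part of the Outlook object model; it only mirrors the idea of an OpenRegions collection whose entries remove themselves when a region reports it has closed.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a wrapper is added to a collection when its
// region opens, and removes itself when the region closes, mirroring the
// OpenRegions / Region_Closed bookkeeping described above.
public class RegionTracker {

    private final List<Region> openRegions = new ArrayList<>();

    public class Region {
        final String name;

        Region(String name) { this.name = name; }

        // analogue of the Closed event handler: drop the tracked reference
        // so garbage collection can reclaim the instance
        public void close() {
            openRegions.remove(this);
        }
    }

    // analogue of BeforeFormRegionShow: wrap and remember the new region
    public Region open(String name) {
        Region r = new Region(name);
        openRegions.add(r);
        return r;
    }

    public int openCount() { return openRegions.size(); }

    public static void main(String[] args) {
        RegionTracker tracker = new RegionTracker();
        Region first = tracker.open("first");
        tracker.open("second");
        first.close();                            // handler removes the reference
        System.out.println(tracker.openCount());  // prints 1
    }
}
```

The design point is the same as in the add-in: the collection holds the only long-lived reference to each wrapper, so removing it on close is all the cleanup needed.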
public void BeforeFormRegionShow(Outlook.FormRegion FormRegion)
{
    if (FormRegion.InternalName == "InternetHeaderDisplay")
    {
        InternetHeaderAdjRegion newRegion =
            new InternetHeaderAdjRegion(FormRegion);
        newRegion.Closed += new EventHandler(Region_Closed);
        OpenRegions.Add(newRegion);
    }
}

void Region_Closed(object sender, EventArgs e)
{
    OpenRegions.Remove(sender as InternetHeaderAdjRegion);
}

Public Sub BeforeFormRegionShow(ByVal FormRegion As _
    Microsoft.Office.Interop.Outlook.FormRegion) _
    Implements Microsoft.Office.Interop.Outlook._FormRegionStartup.BeforeFormRegionShow

    If FormRegion.InternalName = "InternetHeaderDisplay" Then
        Dim newRegion As New InternetHeaderAdjRegion(FormRegion)
        AddHandler newRegion.Closed, AddressOf Region_Closed
        OpenRegions.Add(newRegion)
    End If
End Sub

Private Sub Region_Closed(ByVal sender As Object, ByVal e As EventArgs)
    OpenRegions.Remove(DirectCast(sender, InternetHeaderAdjRegion))
End Sub

At this point, you should compile the project and make sure there are no errors. Next, you need to adjust the project properties to enable F5 debugging with Outlook.

Right-click the project node in Solution Explorer, and select Properties.
Click the Debug tab.
Select Start external program and type the path to the Outlook.exe file, for example, C:\Program Files\Microsoft Office\Office12\outlook.exe.
Change the Working directory value to be the path Outlook.exe resides in, for example, C:\Program Files\Microsoft Office\Office12\.
Save the project and close the Properties tab.

The last step is to register the add-in with Outlook. The add-in template provided with this article automatically generates a .reg file with the necessary settings to register the COM add-in created by the project with Outlook.

Open Windows Explorer and navigate to the project directory.
Double-click the addin_registry.reg file.
Click Yes to merge the changes with the registry.

Press F5 to compile your add-in and start Outlook with the debugger attached.
Click an e-mail message you received from the Internet. You should see an adjoining form region with the transport headers, as shown in Figure 8. Click Copy to copy the contents of the transport headers to the Clipboard.

Now that you have learned how to build an add-in that uses form regions to provide a solution with a custom UI in Outlook, you can deploy that add-in on other computers. The best means for deploying an add-in solution is to build a setup project and distribute an installer package. There are several important considerations you need to remember when building your setup project:

You must install the Microsoft .NET Framework on the target computer before you install your add-in.
You must install the Office primary interop assemblies on the target computer. These are automatically installed with Office if the .NET Framework was installed before Office 2007. If the .NET Framework is installed later, these components can be installed by repairing the Office 2007 installation.
You must install the form region XML manifest on the computer's disk so that Outlook can find it.
You must write the form region registry value that points to the XML manifest in the location it was installed on the computer.
You must write the necessary registry values to register the add-in with Outlook.

The setup project provided in Visual Studio 2005 makes this easier.

Press CTRL+SHIFT+N to create a project.
In the Project types list, expand Other Project Types, and select Setup and Deployment.
Click Setup Project.
Change the Solution drop-down list to Add to Solution.
Type a display name for the project, for example, InternetHeaderAddinSetup.
Click OK.

After Visual Studio creates the setup project, it opens the File System view for the setup project. Then you can add the files you want to write to the file system, including:

Form Region XML Manifest
Add-in Primary Output

In the File System on Target Machine tree view, select the Application folder.
Right-click in the file list, point to Add, and then click Project Output.
Click Primary Output, make sure Project is set to InternetHeadersAddin, and then click OK.

Several files are added to the file view. Exclude the following three files:

Microsoft.Office.Interop.Outlook.dll
Microsoft.Vbe.Interop.Forms.dll
Office.dll

To exclude each DLL, select it in the file list and press F4 to display the properties, then change the Exclude property to True. Repeat this step for each of the three DLL files.

Right-click in the file view, point to Add, and click Project Output.
Click Content Output, make sure Project is set to InternetHeadersAddin, and then click OK.

Now that the project contains all of the necessary files, you need to add the appropriate registry entries to make Outlook discover the add-in and the form region. To edit which registry keys are configured when the installer is run, click View, point to Editor, and click Registry. This displays the Registry Editor for the setup project. Use the variable [TARGETDIR] to indicate the directory into which the add-in will be installed. This allows a relative path to be used for the XML manifest location.

Right-click Registry on Target Machine and select Import.
Browse to the project directory and select addin_registry.reg, then click OK.
Expand HKEY_CURRENT_USER, expand Microsoft, expand Office, and then expand Outlook.
Right-click the Outlook node, point to New, and click Key. Type FormRegions and press Enter.
Right-click the FormRegions node, point to New, and click Key. Type IPM.Note and press Enter.
Right-click the IPM.Note node, point to New, and click String Value. Change the value name to InternetHeaderAdjRegion.
Press F4 to display the value properties page and change the Value property to [TARGETDIR]InternetHeaderAdjRegion.xml.
Save the setup project.

Finally, you need to make sure that the .NET Framework is installed before the add-in. Fortunately, Visual Studio makes this easy.
Click Project, and then click Properties.
Click Prerequisites.
Verify that the .NET Framework 2.0 appears in the list of components.
Click OK, and then click OK again.

Now you can build the setup project and test the installation package.

In addition to a multitude of other new features and functionality in the Outlook 2007 object model, form regions provide a powerful and simple way to customize the Outlook UI in ways that were previously not possible. Form regions also enable you to use a rich and familiar development environment to write business logic and other code running behind forms, instead of restricting you to Notepad and Microsoft Visual Basic Scripting Edition (VBScript). If you have a forms-based solution, you should consider migrating it to Outlook 2007 to use form regions.

Additional resources:

Outlook 2007 Resource Center
Building Outlook 2002 Add-ins with Visual Basic .NET
Code Security Changes in Outlook 2007
What's New for Developers in Outlook 2007 (Part 1 of 2)
What's New for Developers in Outlook 2007 (Part 2 of 2)
I’d like to share a useful practice that can help to improve the efficiency of your BlackBerry® WebWorks™ development: how to get around recompiling your BlackBerry WebWorks application after small edits are made.

Web developers are used to being able to make quick changes to their applications, such as tweaks to the verbiage, style or layout of content, without needing to recompile the application to see the changes. The changes can be made directly on a development server, and previewed immediately in a browser. With the introduction of the BlackBerry WebWorks SDK, web developers may be faced with BlackBerry applications that require a more formal compilation and deployment process.

Using the BlackBerry WebWorks SDK and a configuration file (XML), web resources (HTML, JavaScript, image files) are embedded into a compiled BlackBerry application (COD file). This COD file is your BlackBerry WebWorks application that is then deployed to a BlackBerry® smartphone (or simulator). The authoring process looks like this:

- Make content changes
- Build BlackBerry WebWorks application
- Deploy to BlackBerry smartphone (or simulator)
- Preview & test the changes

Wouldn’t it be great if, for the purposes of development, this process could be simplified to the following?

- Make content changes
- Preview & test the changes

That sure would save a lot of time while you are developing your content, wouldn’t it? Well, with the BlackBerry WebWorks SDK, you can do this – here’s how.

When creating a BlackBerry WebWorks application, you typically define a statement in your configuration file to load your web content as embedded resources. This means that each time a content change is made, you need to recompile the application. Instead, you can deploy this content to an external location such as a shared network domain or even the device SD card. Next, configure your widget to load the content from this location.
Separating this remote content from the compiled BlackBerry COD file means that the widget does not need to be recompiled and redeployed when small changes are made to this content.

So how do you do it? Update the content element in your configuration file to load your primary index.html file from a different location. Change the content element in your config.xml file from:

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="">
  <name>Test</name>
  <content src="index.html"/>
</widget>

to the following if your web resources are deployed to a remote location:

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="">
  <name>Test</name>
  <content src=""/>
</widget>

or the following if your web resources are deployed to an SD card:

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="">
  <name>Test</name>
  <content src=""/>
</widget>

Next, add the following access element to your config.xml file. This element grants your application permission to access resources from the specified URI. Without this declaration, any requests for resources from this URI would be denied.

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="">
  <name>Test</name>
  <content src=""/>
  <access uri="*" subdomains="true"/>
</widget>

When you build your application, you would normally include your web resources in the ZIP file that you provide to the BlackBerry WebWorks Packager. You don’t need to include these files in the ZIP file at this point, as they are not being referenced as embedded resources and do not need to be compiled into the BlackBerry COD file. Instead, make sure these files are deployed to the location specified in the content element of your config.xml file.

The BlackBerry application that is created can now be built and deployed. When launched, it will retrieve your index.html content from the location you specified in your config.xml file.
While you are developing the content for your BlackBerry WebWorks application, you can make changes to the web resources without having to rebuild and/or redeploy your application.

Before you go…

Following this practice, it’s important to remember that your config.xml file needs to be updated whenever your widget content references elements from the BlackBerry JavaScript API collection. For instance, if you add content to your remote web resources that uses elements from the blackberry.system namespace, you would need to add a feature element to your config.xml file for this namespace. This change would require a recompile and redeployment of the BlackBerry WebWorks application for any changes to be reflected.

Finally, please keep in mind that this practice is only recommended for development of your content. In most cases, you would not want your web resources publicly exposed, as this would introduce a risk of unwanted content changes. When you are confident your content is complete, update your config.xml file to reference them as internal resources, then include these web resources in the ZIP file used to build your BlackBerry WebWorks application.

I hope this practice helps reduce your development time. Enjoy and good luck!
In this series of tutorials, we’ll go through setting up Unity’s built-in Nav Mesh, which enables pathfinding and navigation without the need for complex code.

Before you Begin

For Intermediate Unity Developers

This tutorial series is for intermediate Unity users. I won’t go into detail on Unity basics, and assume you know your way around Unity’s interface and core features. Please refer to the Pong tutorial for beginners if you are not yet familiar enough with Unity to follow this tutorial.

Unity Version

This project was created with Unity version 2017.3.1f1, which is the latest version at the time of writing. You should be OK with any Unity version that is not too much older or newer.

Download

A download link is at the bottom of this tutorial. It contains a project as it should be if all the instructions in this part of the tutorial are followed.

Navigation Basics

Unity comes with a great navigation system that can be set up quickly and easily, and gives your characters the ability to navigate a complex environment by pathfinding and avoiding obstacles.

Nav Mesh

Unity’s navigation solution is the Nav Mesh. The Nav Mesh is a map of the areas in your game where a character can walk. When you tell your character to walk to a position on the Nav Mesh, they will automatically find a path to that position (if possible) and move there. If the character can’t reach the target position, they will get as close as possible.

Nav Agent

For a character to use a Nav Mesh, they must have a Nav Mesh Agent component on their GameObject. This agent contains the capabilities needed for navigation on the Nav Mesh; you call methods on this object to make the character move, and use the various settings to specify the precise behaviour you want.

Create a Basic Navigating Character

To start with, we will create a simple scene with a navigating character who moves from their starting position to a target position.
The Scene

If you want to skip the basic non-AI related stuff, download the starter project, which includes a pre-built scene with the basics already done for you so you can get straight into the good stuff. The base project contains a single scene ‘MainScene’, with some ground, an Agent (who we will set up to navigate the world), and a target GameObject which we will use as the place the player will navigate towards.

Create the Base Project and Scene

If you would rather set up the base project yourself, here are the steps required (skip this part if you downloaded the base project, and simply open that project then continue to Add a Nav Mesh).

- Start a new 3D Unity project.
- Create a scene called ‘MainScene’.
- Add a large Plane object to act as the ground.
- Add a small sphere or cube called ‘Agent’, and make sure it is above the ground. Place the Agent near one of the corners.
- Create another GameObject called ‘Target’, give it any 3D shape, such as a cylinder or a cube, and place it far from the Agent object (we will be making the Agent navigate towards the Target).
- Add materials to the objects to separate them visually, and to make your scene more interesting.
- Set up the camera so that you can see the whole of the floor and have a good view of the Agent and Target objects.

Here is what my base scene looks like:

Add a Nav Mesh

A Nav Mesh is used differently to the typical Unity components. Instead of adding it to objects, you add objects to it. For this reason, there is a Navigation window that you must use to set up your Nav Mesh. If you don’t already have the Navigation window visible, go to the Unity menu and select Window > Navigation:

This window has four tabs that present different options and customisations; these are:

- Agents – customise the behaviour of AI characters (agents) using the mesh.
- Areas – customise the different types of terrain (e.g. terrain that is slower to walk on or terrain that can’t be walked on).
- Bake – this is where you apply all your settings and create the Nav Mesh.
- Object – where you select which objects are included in your mesh, and some of their properties.

You will also see that there is a Scene Filter option. This allows you to hide objects in the scene Hierarchy while working on navigation. For example, if you choose the Mesh Renderers option, everything without a Mesh Renderer will be hidden in the Hierarchy, making it easier to find the objects you want to use for navigation. This will come in handy for complex scenes with lots of objects, but for this tutorial we don’t need to worry ourselves about this.

We won’t go into a lot of detail right now. For now, let’s just get something working. In the Navigation window:

- Select the Object tab.
- Select the ground object in the Hierarchy to make it the active object.
- Check that the ground object is now selected in the Navigation window.
- Set the settings as below:
  - Tick Navigation Static.
  - Select Walkable in the Navigation Area drop-down box.
  - Ignore Generate OffMesh Links for now (we’ll look at it later).

You’ve now told the Nav Mesh that you want the ground to be walkable, and that it is ‘navigation static’ (i.e. the Nav Mesh will ‘see’ it when determining the walkable areas). This is the most basic Nav Mesh setup you need.

Bake It

Although we’ve added the ground to the Nav Mesh and made it walkable and navigation static, we have not actually created the Nav Mesh. We need to ‘bake’ it. Baking is a process of doing something during the development process that is too complex or time-consuming to do at runtime. There is no need to create a navigation mesh during runtime, since the terrain doesn’t change much in a game (and small changes don’t require a complete rebuilding of the mesh). The same is done for complex lighting in many games.

- Select the Bake tab in the Navigation window.
- Click the Bake button.
- If you do not already have the Scene window open, open it to see your Nav Mesh.
In the Scene window, the Nav Mesh is presented as a blue overlay on the ground, which represents where Nav Mesh Agents are able to walk:

In our current project, it will just be a simple blue square, but this will change as we later add some complexity.

Add an Agent

A Nav Mesh is pointless without someone to walk around it. We will now turn the Agent object into a ‘Nav Mesh Agent’ who can navigate the Nav Mesh.

- Select the Agent object in the Hierarchy to make it the active object.
- Add a Nav Mesh Agent component to the object in the Inspector window.
- Set the Speed property to any number (between 5 and 10 would be ideal).

You’ll notice a lot of settings on the Nav Mesh Agent component. Some of these are quite obvious (e.g. Speed), and some are not quite so obvious. For now we’ll ignore these settings.

Script

We need some code to make the player navigate, but it’s probably not as much code as you think. The Nav Mesh Agent takes care of movement and navigation, and we only need to tell the agent where to walk to. To keep things simple for now, we’ll set a static target for the player to move towards (the Target GameObject in our scene).

Now, create a new C# script called ‘Agent’. I won’t go into detail about the code, as most of it doesn’t directly have anything to do with navigation (and it’s pretty basic Unity code). The only line of code that is navigation specific is:

agent.SetDestination(target.position);

That line tells the agent where it should try to navigate to, and will trigger the agent to start moving if they are not already at the target. Here is the full code for the Agent script:

using UnityEngine;
using UnityEngine.AI;

public class Agent : MonoBehaviour
{
    [SerializeField] Transform target;

    NavMeshAgent agent;

    void Start()
    {
        // get a reference to the player's Nav Mesh Agent component
        agent = GetComponent<NavMeshAgent>();

        // set the agent's destination
        agent.SetDestination(target.position);
    }
}

- Save the script.
- Attach the script to the Agent GameObject.
- Drag the Target GameObject into the Agent’s Target field in the Inspector:

Yes, that’s all you need for now to get the player navigating towards the target!

Run the Scene

Run the scene and watch the player automatically move towards the target. You can try experimenting with some of the player’s settings, though some of them won’t have any effect on such a basic scene, as the navigation is going to always be in a straight line.

It’s perhaps not too impressive to see the character move in a straight line from start to finish, so in the next part, we’ll add some obstacles and see the pathfinding in action.

Download

Here is the download for the project as it should be at the end of this part of the tutorial. Download it if you get stuck or want to compare your project to mine.
Up to this point, we have developed code that can read a model into memory and write it back out again, detecting a variety of errors in the model. The only point of writing the model out is to verify that the model has indeed been read correctly.

This highway network model could be used for many purposes. The logic circuit model we have discussed could also serve many purposes, and a neuron network model could likewise serve many purposes. Our goal in this course is to build simulations, and the time has come to discuss this in more detail.

An old bumper sticker I picked up at a simulation conference said: "Simulationists do it continuously and discretely." The sticker was a joke because, while members of the general public reading the sticker might guess one subject (sex), the actual statement is entirely true when you read "it" as a reference to computer simulation. There are two broad categories of simulation: continuous and discrete.

Continuous simulation models are common in fields as distinct as analog electronics, weather forecasting and macroeconomics. Here at the University of Iowa, the Hydraulics Institute is largely devoted to continuous simulation of fluid flow. Their building was built in the era when their research was largely done using actual tanks of water and even water from the Iowa River, but today, much of their work is done on computers, simulating not only water flow but also atmospheric flow.

Discrete event simulations are common in fields as distinct as logistics and digital logic simulation.

Almost all simulation models are based on simplifying assumptions. Most physics models assume that air is a vacuum and that the earth is flat. You can build bridges with these assumptions, although for medium and large bridges, it is worth checking how the bridge responds to the wind. (The Tacoma Narrows Bridge disaster of 1940 shows what can happen if you forget the wind when you design a large bridge -- it's in Wikipedia, watch the film clip.)
Our distinction between continuous and discrete models is also oversimplified. There are mixed models, for example, where the set of differential equations that describe the behavior of a system changes at discrete events. At each such event, you need to do continuous simulations to predict the times of the next events.

In a highway network, the events we are concerned with are cars entering and leaving road segments and intersections. Of course, the model can vary considerably in complexity. A simple model might have a fixed travel time on each road segment, while a more complex simulation might model congestion by having the travel time get longer if the population of a road segment exceeds some threshold, where that threshold may itself depend on the unpopulated travel time. In a crude model, each car might make random navigation decisions at each intersection. A more complex model might have each car follow a fixed route through the road network, while a really complex model might include cars with adaptive navigation algorithms so that drivers can take alternate routes when congestion slows them on the path they originally planned.

In a network of digital logic gates connected by wires, the events of interest are changes to gate outputs, which follow changes to gate inputs after a gate delay. The key element here that needs extra discussion is that if the output of a gate is changed and then changed back very quickly, no output change actually occurs. That is, there is a shortest pulse that the gate can generate on its output.

In a neural network model, with neurons connected by synapses, the events of interest are neurons firing and, as a consequence, the firing of the synapses leading from them, which pump up the voltages of downstream neurons. The key element here that needs extra discussion is how the voltage on a neuron changes with time. Between events, the voltage on a neuron decays exponentially, slowly leaking away unless it is pumped up by a synapse firing.
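The decay behavior just described can be sketched in Java. The class and field names here are mine, invented for illustration; the update rule v' = v * e^(k(t' - t)), with a negative decay constant k, is the one the notes develop for this model.

```java
// Sketch of per-neuron exponential decay: remember the last computed
// voltage and the time it was computed, and decay it forward on demand.
public class Neuron {
    double voltage  = 0.0;  // most recently recorded voltage v
    double lastTime = 0.0;  // simulated time t at which v was recorded
    final double k;         // decay rate constant, must be negative

    Neuron(double k) { this.k = k; }

    // decay the recorded voltage forward to time tPrime, then record the
    // new voltage and new time so later updates can work forward from them
    double voltageAt(double tPrime) {
        voltage = voltage * Math.exp(k * (tPrime - lastTime));
        lastTime = tPrime;
        return voltage;
    }

    // a synapse firing at time t pumps up the (decayed) voltage
    void synapseFires(double strength, double t) {
        voltageAt(t);
        voltage += strength;
    }

    public static void main(String[] args) {
        Neuron n = new Neuron(-1.0);
        n.synapseFires(1.0, 0.0);             // voltage jumps to 1.0 at t = 0
        System.out.println(n.voltageAt(1.0)); // one time unit later: e^-1
    }
}
```

Because the decay is recomputed lazily, the simulator never has to step time forward continuously; it only evaluates the formula when an event actually touches the neuron.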
So, for each neuron, we record the most recently computed voltage v and the simulated time t at which that voltage was recorded. Now, if we want to know the voltage at a later time t', we use this formula:

    v' = v e^(k(t' - t))

Of course, once you compute the voltage at the new time t', you record the new voltage and the new time so you can work forward from that the next time you need to do this computation. The constant k determines the decay rate of the neuron (it must be negative).

In a simple model, all the neurons might have the same threshold and the same decay rate, and all synapses might have the same strength. More complex models allow these to be varied. In simpler models, the voltage on a neuron goes to zero when that neuron fires. In more complex models, the threshold has its own decay rate and the threshold goes up when the neuron fires.

In complex models, the strength of a synapse weakens each time it fires because the chemical neurotransmitter is used up by firing. During the time before the next firing, the neurotransmitter can build up toward its resting level. This allows the neural network to get tired if it is too active. (You can actually see this effect at work in your visual pathways. Look at a bright light and then look away, and you will see a negative afterimage that fades as the synapses that were overexcited recharge.)

The key to discrete-event simulation is a data structure called the pending-event set. This holds the set of all future events, events that have not yet been simulated, but that have been predicted to occur at future times as a result of events that have already been simulated. The simulator operates by picking events out of the pending event set in chronological order. Simulating any particular event can involve any combination of changing variables that represent the state of the simulation model, on the one hand, and scheduling future events by adding them to the pending event set, on the other. Some events may only change state variables without scheduling any future events.
Other events may schedule future events without making any change to state variables. We can summarize the basic discrete-event simulation algorithm with the following pseudo-code:

// Initialize
PendingEventSet eventSet = new PendingEventSet()
for each event e that we know in advance will happen {
    eventSet.add( e );
}

while (eventSet not empty) {
    Event e = eventSet.getNext();
    simulate e by {
        update simulation state variables to account for e at e.time
        for each event f that is a consequence of e {
            eventSet.add( f );
        }
        if appropriate, force simulation to terminate
    }
}
// simulation terminates because we ran out of events

Note that the above code gives two different ways to terminate the simulation: one by running out of events, and the other involving a specific event, probably one that was scheduled as part of the initialization, that terminates the simulation. Either approach to termination works equally well. If we have a global constant endOfTime, we can make the event set become empty at that time by checking, as each event is scheduled, to see if it happens before or after the end of time. If it happens before, schedule it. If it happens after, simply discard the event notice.

So what is the event set? It is a priority queue sorted on the time that events are scheduled to happen. The order in which events are supposed to happen has nothing to do with the times at which it can be accurately predicted that they will happen, so the order in which events are placed in the queue has nothing to do with the order in which they are extracted.

Java provides several priority queue classes. Class PriorityQueue is based on a heap implementation. ConcurrentSkipListSet is another. We'll use PriorityQueue here. Unfortunately, the different Java classes that can be used to implement priority queues aren't interchangeable. One of the big differences between the Java alternatives that may concern you is whether the ordering is stable.
Stable priority queues guarantee that two elements inserted with equal priority will be dequeued in the order in which they were enqueued. For well formulated simulation models, stable priority queues are not necessary because the order in which simultaneous events are simulated should not matter. In the real world, if two outcomes are possible from simultaneous events, it is highly likely that either outcome is correct. Stability may be useful to ensure repeatability and easy debugging, but it may also be misleading in cases where the real world behavior is not repeatable because both outcomes are possible.

There are many different ways of using discrete event simulation. We can describe these as simulation frameworks. They change the way we write the code to schedule events, but they do not change the underlying simulation model. Here is an initial (and poorly thought out) framework:

/** Framework for discrete event simulation. */
public class Simulator {

    public static abstract class Event {
        public float time;        // the time of this event
        abstract void trigger();  // the action to take
    }

    private static PriorityQueue <Event> eventSet = new PriorityQueue <Event> (
        (Event e1, Event e2) -> Float.compare( e1.time, e2.time )
    );

    /** Call schedule(e) to make e happen at its time. */
    static void schedule( Event e ) {
        eventSet.add( e );
    }

    /** Call run() after scheduling some initial events
     *  to run the simulation. */
    static void run() {
        while (!eventSet.isEmpty()) {
            Event e = eventSet.remove();
            e.trigger();
        }
    }
}

The problem with the above framework is that it requires the user to create large numbers of subclasses of Event, where each subclass includes a trigger method that does the required computation. Scheduling an event is actually an example of a delayed computation, and as we've seen, Java provides a tool that allows delayed computation and implicit creation of anonymous subclasses: the lambda expression. The above framework isn't set up to use these!
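Before fixing the framework, it is worth convincing yourself that a comparator-based PriorityQueue really does hand events back in time order no matter what order they were enqueued. This small standalone check (not part of the Simulator framework; the class name is mine) does exactly that:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Standalone check that a PriorityQueue ordered on a time field dequeues
// events chronologically regardless of the order in which they were added.
public class EventOrderDemo {

    static class Event {
        final float time;
        Event(float time) { this.time = time; }
    }

    // enqueue out of order, then drain and report the dequeue order
    static List<Float> drain() {
        PriorityQueue<Event> eventSet = new PriorityQueue<>(
            (e1, e2) -> Float.compare(e1.time, e2.time));
        eventSet.add(new Event(5.0f));
        eventSet.add(new Event(1.0f));
        eventSet.add(new Event(3.0f));
        List<Float> order = new ArrayList<>();
        while (!eventSet.isEmpty()) {
            order.add(eventSet.remove().time);
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(drain());  // prints [1.0, 3.0, 5.0]
    }
}
```

Note that only the head of the queue is guaranteed to be the minimum; iterating over a PriorityQueue directly does not visit elements in sorted order, which is why the simulator always uses remove().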
The following simulation framework uses lambda expressions. We will use this framework as we develop our simulation:

/** Framework for discrete event simulation. */
public class Simulator {

    public interface Action {
        // actions contain the specific code of each event
        void trigger( float time );
    }

    private static class Event {
        public float time;  // the time of this event
        public Action act;  // what to do at that time
    }

    private static PriorityQueue <Event> eventSet = new PriorityQueue <Event> (
        (Event e1, Event e2) -> Float.compare( e1.time, e2.time )
    );

    /** Call schedule to make act happen at time.
     *  Users typically pass the action as a lambda expression:
     *  <PRE>
     *  Simulator.schedule( t, ( float time ) -> method( ... time ... ) )
     *  </PRE>
     */
    static void schedule( float time, Action act ) {
        Event e = new Event();
        e.time = time;
        e.act = act;
        eventSet.add( e );
    }

    /** Run the simulation.
     *  Call run() after scheduling some initial events to run the simulation.
     */
    static void run() {
        while (!eventSet.isEmpty()) {
            Event e = eventSet.remove();
            e.act.trigger( e.time );
        }
    }
}

When writing a simulation, it is important to begin by settling on a framework, because that determines the structure of the simulation code. Changing the framework after you have begun writing code can be messy, but it is not impossible.
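To see the lambda-based framework in action, here is a self-contained usage sketch. The scheduling machinery is restated in compressed form inside one class so the example compiles on its own; the "clock" model that ticks once per time unit, and the endOfTime check on scheduling, are my additions for illustration, not code from the notes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// A tiny end-to-end run in the style of the lambda framework above.
public class ClockDemo {

    interface Action { void trigger(float time); }

    static class Event {
        final float time; final Action act;
        Event(float time, Action act) { this.time = time; this.act = act; }
    }

    static final PriorityQueue<Event> eventSet = new PriorityQueue<>(
        (e1, e2) -> Float.compare(e1.time, e2.time));

    static void schedule(float time, Action act) {
        eventSet.add(new Event(time, act));
    }

    static void run() {
        while (!eventSet.isEmpty()) {
            Event e = eventSet.remove();
            e.act.trigger(e.time);
        }
    }

    // the simulation model: a clock that ticks once per unit of time
    static final float endOfTime = 3.0f;
    static final List<Float> ticks = new ArrayList<>();

    static void tick(float time) {
        ticks.add(time);
        if (time + 1.0f <= endOfTime) {        // discard events past the end
            schedule(time + 1.0f, (float t) -> tick(t));
        }
    }

    public static void main(String[] args) {
        schedule(1.0f, (float t) -> tick(t));  // the one initial event
        run();                                 // runs out of events after t = 3
        System.out.println(ticks);             // prints [1.0, 2.0, 3.0]
    }
}
```

Each tick schedules its successor as a lambda, so no Event subclasses are ever written, and the simulation terminates by the first mechanism described above: the event set simply becomes empty once no event is scheduled beyond endOfTime.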
http://homepage.divms.uiowa.edu/~jones/object/notes/14.shtml
Chua Wen Ching's Blog: Technologies, Communities, Career, Life

VS 2008 SP1 - Error connecting to undo manager of source file
I got this error:

ASP.NET MVC Beta - IE 7 browser not refreshed to CSS/Javascript changes in VS 2008 SP1
I think this might be useful to developers. I suspect this is a bug; hopefully it can be resolved soon. I can't find anyone talking about this on the internet. Maybe someone has already posted this in the Microsoft Forum. If not, you can refer to it here.

JavaScript - With or Without Enclosing Script tag
This is really simple. But to be honest, I am still wondering why the latter one will not work.

Microsoft XSS Detect Code Analysis - Missing License or Expired
Why would you get this error when you install Microsoft XSS Detect Code Analysis on top of Visual Studio 2008 in a Windows Vista Service Pack 1 environment?

TortoiseSVN - check-in/changed icons not rendered properly in Windows Vista SP 1
I realized this issue occurs in Windows Vista only. It worked fine on a Windows Server 2003 R2 machine. Check the image below :) As for me, I have to trust the right pane more than the left pane for accuracy.

Running ASP.NET 2.0 on FileSystem instead of IIS without Visual Studio 2005/2008 installed
How can you run your ASP.NET 2.0 web application on the file system (a standalone web server without IIS) without Visual Studio 2005/2008 installed on your local machine? I would say this is particularly useful for showing a POC ASP.NET 2.0 web application to customers, or for having a web graphics designer test the application without knowing anything about IIS or running F5 in Visual Studio 2005/2008. How can you do that? I heard from lots of people that it wasn't possible, that there was no way to extract this file-system web server from Visual Studio 2005/2008. You can download the existing Cassini 1.0 and enhance it. You can also find some implementations online like this, or download UltiDev Cassini (which I wasn't able to install properly).
This might not be common, but I believe it can be useful to lots of people out there.

Tip #1
You can run this file-system web server without having Visual Studio 2005/2008 installed, as long as you have .NET Framework 2.0 or above installed. You can locate the file "WebDev.WebServer.EXE" at C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727. Try it yourself.

Tip #2
Run this application in the Visual Studio 2005/2008 command prompt as below, or open a command prompt and change directory to C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727

    webdev.webserver.exe /path:"C:\MyPOC\TestWeb" /port:1337 /vpath:"/TestWeb"

First you define the path of your ASP.NET 2.0 web application; I include quotes around the path. Set any port you like, but be careful of conflicts with ports already used by other applications. I noticed that when you debug an ASP.NET 2.0 file-system project in Visual Studio 2005/2008 it uses port 1337, so I reused it. I also recommend setting a virtual path as above; it can be anything you like.

Tip #3
webdev.webserver is not dependent on IIS. If you don't believe it, try stopping all your IIS web processes.

Tip #4
The ASP.NET Development Server will run in your taskbar as below. But don't even try to click on the Root URL; you will never be able to get it to run. I almost gave up because of this, as I thought there was some JIT error. Instead, run the URL directly in your web browser (IE or Firefox). It works :)

Hope you find this useful. If you know a better way, let me know. Thanks.
Updates: I realized some machines do not have the webdev file in that location; instead it is stored at C:\Program Files\Common Files\Microsoft Shared\DevServer\9.0. Nevertheless, it works :) Good luck.

My coming initiatives for Green Computing/Orphans July 20, Senior Citizens July 26
I will be having two events that are worth highlighting in my blog:

Date: Sunday, July 20, 2008
Organizers: MIND and Mesiniaga (co-organizer)
Venue: FRIM (within the forest)
Target audience: 60 orphans (age between 8 - 17), 60 working adults (registration will open soon), 10 (organizers, press)
Press: Newspaper, magazines and a TV channel (tentative). PC.COM is confirmed to come and cover the event.
Entrance: It's free. MIND (my community) will be covering everything, including lunch.

The session is a platform to educate the kids/adults about Green Computing, as well as to build synergies between the orphans and IT working professionals. There will be indoor and outdoor activities. Thanks to Microsoft for sponsoring my community and making things happen. Current sponsors: Dell Malaysia, Microsoft Malaysia, Microsoft Singapore, Microsoft MVP SEA Team, Microsoft Corp - VSTS Team, ISA Technologies, eQuva, Jetbrains, O'Reilly, PeachPit, PC.COM, SSW Australia, Red-Gate. If any of you would like to be our sponsor, drop me an email. I am not looking for cash funding; instead, anything that you can sponsor to rock these kids, or donate to the orphanage home. Thanks. The event promotion will be advertised soon. Stay tuned.

Date: Saturday, July 26, 2008
Organizers: MIND
Speaker highlight: Yew Ban and Poh Sze
Venue: Microsoft Malaysia (Auditorium)
Target audience: minimum 30, maximum 100 senior citizens (age 40 - 70)
Press: Unknown yet. Probably a Mandarin newspaper.
Entrance: It's free. MIND's sessions are always free and make an impact on the community in Malaysia.

This is the first session that MIND will be conducting in Mandarin (traditional Chinese), as well as the first for senior citizens.
The prerequisite is that these senior citizens know the fundamentals of computing. Yew Ban, Poh Sze and I will be speaking in Mandarin. MIND members can drop by for this session, but it is kept really simple and easy for the senior citizens to understand, so I am not sure it suits the elite folks. The agenda will be as below:

Well, I hope both events run smoothly and serve as a teaser to my blog readers of MIND's coming initiatives. Thank you.

C# 3.0 - Using Anonymous Types
A friend recently asked me about ways of using anonymous types in C# 3.0. Take note, this is just an example, and to me it is like syntactic sugar; you can do it the conventional way and it still works. If you have a ComboBox in a .NET Windows Forms application and you would like to configure it programmatically with a list of values (Value & Member), normally people will do this:

    IList listOfAnimals = new ArrayList {
        new Animal { Id = 10, Name = "Itik" },
        new Animal { Id = 1, Name = "Ayam" }
    };

    comboBox1.DataSource = listOfAnimals;
    comboBox1.ValueMember = "Id";
    comboBox1.DisplayMember = "Name";

    public class Animal
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

To me, it is a bit redundant to create an Animal class just for the purpose of programmatically setting the values in an IList. Take note, it will not read from the database, and I am using ArrayList for this example; you can use other types, of course. What if you have ComboBoxes that cater for Country, State, Cities, Phone Numbers, etc.? The other way that is most appealing would be:

    public class UltimateGenericComboBoxClass
    {
        public int Value { get; set; }
        public string Member { get; set; }
    }

And you use it across all your programmatically populated ComboBoxes.
As for me, I recommend considering anonymous types, as below:

    IList listOfAnimals = new ArrayList {
        new { Id = 10, Name = "Itik" },
        new { Id = 1, Name = "Ayam" }
    };

    comboBox1.DataSource = listOfAnimals;
    comboBox1.ValueMember = "Id";
    comboBox1.DisplayMember = "Name";

As in the code above, within the ArrayList I call new { Id = 10, Name = "Itik" }. I did not create any redundant class for this. Let me know if you have a shorter way of doing this. Have fun.

Japanese C# MVP session "Gaku Pro"
Great work, Yuto. Even though I don't understand Japanese, I like to see synergies among C# MVPs in other countries to promote C# in non-English versions. You can download his "Gaku Pro" slides here. Gaku Pro means Programming for Students.
http://weblogs.asp.net/wenching
The OMXMLWRITE library can be used to convert a markup event stream to a well-formed XML instance. The library exports two functions, xml.writer and xml.written. Each function performs the same task through a different interface; which one is better to use depends on the context. The following example program parses well-formed XML input and uses the OMXMLWRITE library to write it out as XML again.

    import "omxmlwrite.xmd" prefixed by xml.

    process
       do xml-parse scan #main-input
          output xml.written from #content
       done

This program can be used to normalize well-formed XML instances. All XML produced by the OMXMLWRITE library has the following property: the characters < and & are represented using the built-in entities &lt; and &amp;.
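The normalization idea is not OmniMark-specific. A rough Python sketch of the same round trip (parse a well-formed instance, then re-serialize it, letting the writer emit the built-in entities) might look like this; the toy document is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Parse a well-formed instance and write it out again; the serializer
# represents the special characters < and & with the built-in entities.
doc = ET.fromstring("<doc>fish &amp; chips &lt;tasty&gt;</doc>")
out = ET.tostring(doc, encoding="unicode")
print(out)  # <doc>fish &amp; chips &lt;tasty&gt;</doc>
```

Whatever whitespace or entity quirks the input had, the output is a canonical serialization of the same parsed content.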
http://developers.omnimark.com/docs/html/library/129.htm
How to get the text from a website using Selenium

Hey, you can use the getText() method to get the string from a particular section of the website. Here is the program that I used, and it worked pretty well.

    System.setProperty("webdriver.chrome.driver", "C:\\Users\\priyj_kumar\\Downloads\\chromedriver.exe");
    WebDriver driver = new ChromeDriver();
    driver.get("");
    String str = driver.findElement(By.xpath("//*[@id='mp-tfa']/p")).getText();
    System.out.println(str);

This way you can get the string from the website.
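In Python-flavoured Selenium the equivalent of getText() is the element's `.text` property. When no live browser is available, the underlying idea (pull the visible text out of an element) can be sketched with the standard library alone; the HTML fragment below is made up for illustration:

```python
from html.parser import HTMLParser

class TextGrabber(HTMLParser):
    """Collects the text content of everything it parses."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

grabber = TextGrabber()
grabber.feed("<div id='mp-tfa'><p>Featured article text</p></div>")
print("".join(grabber.chunks))  # Featured article text
```

Unlike Selenium, this sees only the static markup; anything rendered by JavaScript still needs a real browser driver.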
https://www.edureka.co/community/34429/how-to-get-the-text-from-a-website-using-selenium?show=34432
# react-collapse-pane

This is intended to be the simple, reliable, configurable, and elegant solution to having splittable, draggable and collapsible panes in your React application.

[click for storybook demo] [click for documentation site]

## Getting Started :rocket:

Install react-collapse-pane:

```sh
npm i react-collapse-pane
# or for yarn
yarn add react-collapse-pane
```

Once installed you can import the SplitPane component in your code.

```ts
import { SplitPane } from "react-collapse-pane";
```

If you're using Typescript, the SplitPaneProps type, as well as a few other helper types, is also available.

```ts
import { SplitPane, SplitPaneProps, ResizerOptions, CollapseOptions, SplitPaneHooks } from "react-collapse-pane";
```

## Quick Start Usage :fire:

The only component you must interact with is SplitPane. This serves as a wrapper for all of the children you wish to lay out. All you're required to give is a split prop, which can be either "horizontal" or "vertical". This identifies the orientation of the split panel.

```tsx
<SplitPane split="vertical">
  <div>This is the first div</div>
  <div>This is the second div</div>
  <div>This is the third div</div>
  This is the fourth but not a div!
</SplitPane>
```

What you just did is make a splittable panel layout! Congrats! That was easy! :grin:

This basically splits the children vertically (i.e. full-height split). The children can be any valid React child. If a child is null it will be excluded from being split or displayed. By default there is a 1px divider with a grabbable surface of 1rem width or height (depending on the split). There is also no limit to the number of divs you have as children. The SplitPane will split them all accordingly.

But what about collapsing the panels, styling the resizer, or RTL support? :sob:

This library supports all of these things and more! For more details check out the documentation.

## Documentation

Documentation can be found at

If you notice an issue then please make an issue or a PR!
All docs are generated from the docs folder in the master branch.

## Contributing and PRs :sparkling_heart:

If you would like to contribute please check out the contributor guide. All contributions are welcome! All issues and feature requests are welcome!

## Credit and Attribution :pray:

This project did not start off from scratch. The foundation of the project was the excellently written react-multi-split-pane library, which is itself a typescript rewrite of the react-split-pane library. Much gratitude to their authors, @NeoRaider and @tomkp.

## Contributors ✨

Thanks goes to these wonderful people (emoji key):

This project follows the all-contributors specification. Contributions of any kind welcome!
https://opensourcelibs.com/lib/react-collapse-pane
It would also help just to get the error messages again. Currently we would like to be early adopters and give it a try, but I don't know how to enable the error messages (same as OP).

Posts made by rund1me

- RE: How to UPGRADE from Quasar 0.17 to 1.0 beta

- RE: Select unusable with a lot of data
  Ahem, sorry guys, I just wanted to help where I thought there might be a performance issue; I don't actually use the select like that. I was just suspecting an issue, since the table actually renders pretty fast (also when using pagination "all") while the select hangs. I just used this data size so that you might see that one could maybe increase the performance. Even with a few elements in the select, a few ms here and there can always make a difference.

- Select unusable with a lot of data
  Hi there, during my performance testing with lots of data I found the following: selects with a lot of data are incredibly slow, to the point where they actually crash the browser. --> select with 30k entries. What got me wondering was that the datatable, even with a filter and more columns, can still be used with this much data (not using conditional rendering). -> datatable with 30k entries. Again, if I can help to trace some problems just tell me! Cheers!

- RE: Q-table Slow conditional rendering
  Just wanted to ask how I can help with the issue.

- Q-table Slow conditional rendering
  Hi!
- RE: Quasar Table Jump to page
  I have done the following now: I have overwritten the default sort (actually with the same code as the default table sort) and added the __sortedIndex:

    tableSort (data, sortBy, descending) {
      const col = this.columns.find(def => def.name === sortBy)
      if (col === null || col.field === void 0) {
        let sortedStuff = data.slice().map((row, i) => {
          row.__sortedIndex = i
          return row
        })
        return sortedStuff
      }
      const dir = descending ? -1 : 1,
        val = typeof col.field === 'function' ? v => col.field(v) : v => v[col.field]
      let sorted = data.sort((a, b) => {
        let A = val(a), B = val(b)
        if (A === null || A === void 0) { return -1 * dir }
        if (B === null || B === void 0) { return 1 * dir }
        if (col.sort) { return col.sort(A, B) * dir }
        if (isNumber(A) && isNumber(B)) { return (A - B) * dir }
        if (isDate(A) && isDate(B)) { return sortDate(A, B) * dir }
        if (typeof A === 'boolean' && typeof B === 'boolean') { return (a - b) * dir }
        [A, B] = [A, B].map(s => (s + '').toLowerCase())
        return A < B ? -1 * dir : A === B ? 0 : dir
      })
      let sortedStuff = sorted.slice().map((row, i) => {
        row.__sortedIndex = i
        return row
      })
      return sortedStuff
    }

  It would be nice to have a callback after sorting is done so we could add the __sortedIndex there; then it's not necessary to re-code the default sorting. I am also watching the selected rows and the pagination object for changes, and selecting the page in the pagination object. This is done by using the default __index if not sorted, and the __sortedIndex if available:

    selectedRows (what) {
      if (what.length > 0) {
        let idx = what[0].__index + 1
        if (what[0].__sortedIndex) {
          idx = what[0].__sortedIndex + 1
        }
        let newPage = idx / this.paginationControl.rowsPerPage
        this.paginationControl.page = Math.ceil(newPage)
      } else {
        this.paginationControl.page = 1
      }
    }

  It somehow works even with filtering, which I don't understand. Do you have any better solution to this, or do you see any problems here? Thanks for the help and cheers!
- RE: Quasar Table Jump to page
  I try to make it as general as possible so you might be able to integrate it with slight modifications.

- RE: q-table selection shows y-scrollbar
  Thanks, that worked!

- q-table selection shows y-scrollbar
  Hi,?

- RE: Quasar Table Jump to page
  Thanks for the answer. I tried that already and it works, but then I have to keep track of which page the item is on at all times (after sorting, filtering, etc.).

- Quasar Table Jump to page
  Hi, in my current component I integrated a map. When I click a marker on the map I want to select a single row; therefore I used :selected.sync with an array which I always overwrite. This works fine, but now I have a problem: I can't find functionality to jump to the page of the currently selected item, and I use sorting and filtering too. In the callback of the click I have the object (including __index) available. Watching the pagination control and doing my own resorting etc. seems like a dirty hack. Is there anything available where I can just do something like this: pagination.JumpToIndex(__index)? Any suggestions? Thanks a lot
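The page-jump arithmetic used in the selectedRows watcher earlier in the thread is just a ceiling division of the row's 1-based position by the page size. A language-neutral sketch (the helper name is made up for illustration):

```python
import math

def page_for_index(sorted_index, rows_per_page):
    """1-based page number that contains the row at 0-based sorted_index."""
    return math.ceil((sorted_index + 1) / rows_per_page)

# Rows 0..9 land on page 1, row 10 starts page 2.
print(page_for_index(0, 10), page_for_index(9, 10), page_for_index(10, 10))  # 1 1 2
```

The only subtlety is which index to feed in: after sorting or filtering it must be the row's position in the displayed order, not its position in the original data array.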
https://forum.quasar-framework.org/user/rund1me/posts
Does anyone have any suggestions as to how I could better this code? (Not too fancy.) I'm supposed to use a switch statement. I think the concept isn't bad, but some things are off as far as my options displaying correctly.

#include <iostream>
using namespace std;

int main()
{
    char Direction;
    int room;

    cout << "*****************************************\n"
         << "* A Simple Text Based C++ Adventure Game*\n"
         << "*****************************************\n\n\n"
         << "At any given time you, the player can move north,\n"
         << "south, east, and west by entering N, S, E, or W respectively.\n"
         << "When you enter a room the program will output a room description\n"
         << "and tell you what directions have doors available. If you select\n"
         << "a direction then you will move in that direction to a room adjacent\n"
         << "to where you are located. If you enter a direction where there is no\n"
         << "door you will be informed that the move is invalid.\n\n\n"
         << "You, the player will begin in room 1 and win the game when you reach room 7.\n\n\n";

    cout << "You are now in Room 1.\n"
         << "Room 1 has a north door to room 2 and an east door to room 4.\n";
    room = 1;

    cout << "Enter a direction (N or E): ";
    cin >> Direction;

    switch (Direction)
    {
    case 'n':
    case 'N':
        if (room == 1)  // == compares; a single = would assign and always be true
        {
            cout << "You are now in Room 2.\n"
                 << "Room 2 has an east door to room 3 and a south door to room 1\n"
                 << "Enter a direction (E or S) ";
            cin >> Direction;
            room = 2;
        }
        if (room == 4)
        {
            cout << "You are now in Room 3.\n"
                 << "Room 3 has an east door to room 6, a west door to room 2,\n"
                 << "a north door to room 5, and a south door to room 4.\n"
                 << "Enter a direction (E, W, N, or S) ";
            cin >> Direction;
            room = 3;
        }
        break;
    case 's':
    case 'S':
        if (room == 6)
        {
            cout << "You are now in room 7.\n"
                 << "YOU WIN THE GAME!!!!!\n";
            room = 7;
        }
        break;
    case 'e':
    case 'E':
        if (room == 1)
        {
            cout << "You are now in Room 4.\n"
                 << "Room 4 has a north door to room 3 and a west door to room 1\n"
                 << "Enter a direction (N or W) ";
            cin >> Direction;
            room = 4;
        }
        if (room == 2)
        {
            room = 3;
        }
        if (room == 3)
        {
            room = 6;
            cout << "You are now in room 6.\n"
                 << "Room 6 has a south door to room 7\n"
                 << "and a west door to room 3.\n"
                 << "Enter a direction (W or S) ";
            cin >> Direction;
        }
        break;
    case 'w':
    case 'W':
        if (room == 6)
        {
            room = 3;
        }
        break;
    default:
        cout << "unknown room\n\n";
    }
}
https://www.daniweb.com/programming/software-development/threads/41591/c
On Fri, 16 Aug 2002, Tim Robbins wrote:

> On Thu, Aug 15, 2002 at 03:18:59PM -0700, Alfred Perlstein wrote:
>
> > /usr/obj/vol/share/src/i386/usr/include/stdbool.h:41: warning: useless keyword or type name in empty declaration
> > /usr/obj/vol/share/src/i386/usr/include/stdbool.h:41: warning: empty declaration
> >
> > I get those a lot now... please fix.
>
> This happens because GCC 3 doesn't define __STDC_VERSION__ unless
> you specify -std=<something>,

Also because ru just removed the special gcc hacks for errors in standard headers, so the broken <stdbool.h> is now detected. gcc 3 doesn't define __STDC_VERSION__ for -std=c89 either. __STDC_VERSION__ is a c99 thing, so this seems right.

> whereas 2.95 defines it to 199409L if no
> -std option is given.

I didn't know that. gcc 2.95 has the interesting bug of not putting either __STDC__ or __STDC_VERSION__ in gcc -E -dM output, so they are hard to see.

> I'm not quite sure how to check for this. Perhaps this
> would work:
>
> #if __STDC_VERSION__ < 199901L && __GNUC__ < 3
> typedef int _Bool;
> #endif

Seems best.

> GCC 3 appears to declare _Bool regardless of any -std option.

Even gcc -traditional declares it. This may be a bug. _Bool is in the implementation namespace, but still causes problems (a bit like declaring __STDC__ for nonstandard compilers). We have a hack in <sys/cdefs.h> related to this. __func__ is a C99 thing, so it should be ifdefed using __STDC_VERSION__, but it is also a gcc thing, so the correct ifdef using __STDC_VERSION__ doesn't work. We handle __restrict/restrict a little differently (probably better).

Bruce

To Unsubscribe: send mail to [EMAIL PROTECTED] with "unsubscribe freebsd-current" in the body of the message
https://www.mail-archive.com/freebsd-current@freebsd.org/msg41949.html
On Errors in Repeated Functions

Recently I found myself parsing several similar XML files in Python. The XML had a deeply nested structure I wanted to get stuff out of, which means using Python's xml.etree.ElementTree find and findall methods:

    for entry in root.find('this').find('this2').findall('entry'):
        first_thing = entry.text
        for next_entry in entry.find('this3').findall('next_entry'):
            thing_i_want = next_entry.find('thing_i_want').text
            other_thing = next_entry.find('junk').find('other_thing').text

With this bit of code, I want to grab a list of the text in an XML tree. It's readable, succinct, and does no error handling. Each of those find calls can return None if its element isn't found, and because I want to parse multiple files I can't have that. Fortunately, for the most part, I want to handle the errors in only two ways: assign None to the specific thing, or quit the whole parse and try again with the next one (whether that's at the file level or at the XML branch level). The other problem is how to repeat this error handling non-awkwardly.

One way to handle the errors is with a try/except block:

    for entry in root.find('this').find('this2').findall('entry'):
        try:
            first_thing = entry.text
            for next_entry in entry.find('this3').findall('next_entry'):
                thing_i_want = next_entry.find('thing_i_want').text
                other_thing = next_entry.find('junk').find('other_thing').text
        except AttributeError:
            first_thing = None
            thing_i_want = None
            other_thing = None
            # or `continue` if the rest of the info isn't worth processing on error

The problem with this construct in this situation is that it's not granular enough. I'd like to get the first_thing if possible and assign None to the others if I can't get them. I could use multiple try/except blocks, but it quickly becomes very unreadable.
Another way of doing it is to test everything before we find it:

    this = root.find('this')
    if this is not None:
        this2 = this.find('this2')
        if this2 is not None:
            # iteration over None just skips the loop, so no need to check here
            for entry in this2.findall('entry'):
                ...

The problem with this is obvious. Look at that nesting! I'd be halfway across the screen before I got anything done! However, this pattern can be abstracted into a function.

    def find_or_none(node, taglist):
        for tag in taglist:
            node = node.find(tag)
            if node is None:
                return node
        return node

This compressed error handling could also be implemented using exceptions. With this punted error handling, the code becomes:

    this2 = find_or_none(root, ['this', 'this2'])
    if this2 is not None:
        for entry in this2.findall('entry'):
            first_thing = entry.text
            next_entries = entry.find('this3')
            if next_entries is not None:
                for ...

This works, but there's one way I think it can be improved. Instead of checking if something is not None, check if it is None and break, continue, or return the heck away from the error. With this change, I get fairly clean error handling:

    this2 = find_or_none(root, ['this', 'this2'])
    if this2 is None:
        return None
    for entry in this2.findall('entry'):
        first_thing = entry.text
        this3 = entry.find('this3')
        if this3 is None:
            continue  # or `break` or `return` if that's appropriate
        for next_entry in this3.findall('next_entry'):
            thing_i_want = next_entry.find('thing_i_want')
            if thing_i_want is not None:
                thing_i_want = thing_i_want.text
            other_thing = find_or_none(next_entry, ['junk', 'other_thing'])
            if other_thing is not None:
                other_thing = other_thing.text

It's not pretty, but it works. Other techniques in other languages for this kind of thing are C#'s null-conditional operator and functional programming's monadic error handling. There's also an excellent video including a functional approach to this (and other) problems. Rust also has a monadic approach to error handling.
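As a quick self-contained check, here is find_or_none run against a toy tree (the function is restated so the snippet runs on its own; the XML is made up for illustration):

```python
import xml.etree.ElementTree as ET

def find_or_none(node, taglist):
    # Walk down the tree one tag at a time, bailing out at the first miss.
    for tag in taglist:
        node = node.find(tag)
        if node is None:
            return node
    return node

root = ET.fromstring("<root><this><this2><entry>hi</entry></this2></this></root>")
print(find_or_none(root, ['this', 'this2', 'entry']).text)  # hi
print(find_or_none(root, ['this', 'missing']))              # None
```

The happy path returns the deepest element; a miss anywhere along the chain collapses to a single None instead of an AttributeError.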
After I read/watch this stuff again, I’ll probably be ashamed of this post, but until then, it’s up :) Update: I still find this useful for some things, but XML handling is best handled by XSLT! Use XSLT!
https://www.bbkane.com/2016/12/16/On-Errors-in-Repeated-Functions.html
One day or other you might get a chance to integrate GS1 systems. That day shouldn't be challenging for you 🙂 . This blog describes how to create/exchange GS1-understandable messages so that PI [7.1] can post GS1-understandable documents, along with a sample use case for developing the same. It is quite painful at the initial stage to find the correct way of integrating GS1 systems via SAP PI, and there is no proper SDN blog or discussion available on the subject; hence I have shared this information based on my project learning. It doesn't contain any step-by-step deployment or development instructions; it describes how to frame/build the main GS1 document XML schema using the available GS1 standard sub-schemas. It is for intermediate PI consultants.

Hints/Prerequisites:
- Here I have explained based on GS1 XML Business Message Standards (BMS) V 2.8.
- Download the GS1 XML Business Message Standards (BMS) and Schemas, Version 2.8 GDSN Price Synchronisation.
- The base business document used for this KB is GDSN Price Synchronisation. You can download all the standard sub-schemas using the given link.
- Get the sample document (here, Price) output XML from the GS1 team.
- Any XML editing tool will do; here I have used Liquid XML Studio. You can use any third-party XML editing tool such as Altova XMLSpy or Stylus Studio.
- Java mapping code was developed and compiled with the J2SE 1.5 compiler, since PI 7.1 uses the older Java compiler.

Overview of GS1
- GS1 Member Organizations handle all enquiries related to GS1 products, solutions and services.
- GS1 has close to 40 years' experience in global standards – see our timeline for more information.
- GS1 offers a range of standards, services and solutions to fundamentally improve efficiency and visibility of supply and demand chains.
- GS1 standards are used in multiple sectors and industries.
GS1 XML Business Message Standards (BMS)

The GS1 main XML message consists of multiple segments/layers:
- Transport
- Service
- Business document

GS1 XML Business Message Architecture

Basically we need to generate the XML in the above consolidated segments, which will let us post GS1 documents successfully.

Sample Use Case

Our requirement is to post the Pricing (Add/Update) document to GS1 from ECC via PI. This is designed as a simple asynchronous Proxy-to-File scenario.

Initial Steps

In order to build/frame the GS1 XML Business Message Architecture based XML output, follow these steps.

Step 1: Here I have used V2.8 of GS1 BMS. Download the GS1 XML Business Message Standards (BMS) and Schemas, Version 2.8 GDSN Price Synchronisation. Once you have downloaded the schemas, unzip them and keep them on your local desktop. The folder structure would be as follows; for e.g., the PriceSynchronisationDocument schema is under the folder path ean.ucc\gdsn.

Step 2: We may not directly use/import in XI as external definitions what we have downloaded in the earlier step. We need to adjust/edit the external definitions as follows. Once you have extracted the relevant schemas (.xsd), open each xsd and check for any relative paths specified as ../../ in the xsd:import schemaLocation. Remove the relative paths and save.

Note: You need to check all the schemas before you upload to the XI system. Make sure each has the proper folder path.

Sample:
    <xsd:import namespace="urn:ean.ucc:2" schemaLocation="../../ean.ucc/common/Document.xsd"/>

After correction it would look as follows.

Note: Simply replacing the ../../ string in the xsd:import statements is not sufficient. Once we have imported the provided standard schemas, we need to find the issues in the imported external definitions, manually edit, and re-import the broken schema and its dependent schemas. This applies wherever you find an issue in a schema (red lines in the external definition).
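The manual path cleanup described above can be automated. A sketch (the helper name is made up, and since some schemas still need hand edits afterwards, run it against a copy of the schema folder first):

```python
import re
from pathlib import Path

def fix_schema_locations(folder):
    """Strip leading ../../ prefixes from schemaLocation attributes in all .xsd files."""
    for xsd in Path(folder).rglob("*.xsd"):
        text = xsd.read_text(encoding="utf-8")
        # schemaLocation="../../ean.ucc/..." -> schemaLocation="ean.ucc/..."
        fixed = re.sub(r'schemaLocation="(?:\.\./)+', 'schemaLocation="', text)
        if fixed != text:
            xsd.write_text(fixed, encoding="utf-8")
```

After running it, the remaining red-line errors in the imported external definitions still have to be fixed by hand, as the note above explains.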
Steps to be Performed in ESR

Importing external definitions: Once you have removed all the relative paths in the standard GS1 schemas, upload all the XSDs using the import external definition option from the ESR. In PI 7.1 we have the option to do a mass upload of external definitions: go to Tools –> Import external definition from ESR.

Steps to be performed in the XML editing tool

Step 1: Open the sample output XML which you have received from the GS1 team for the relevant business document. A sample would be as follows. It has all the GS1 BMS components as mentioned above.

Step 2: Open the sample GS1 XML using the XML tool. Once you import this XML, it will try to load all the corresponding linked schemas for validation. Once it has loaded successfully, click on Infer XSD Schema. It will pop up and ask you where to store the generated schemas; select the required folder path and click on Finish. Basically it will generate 4 schemas from the sample XML file, as follows. If you open the first schema in the XML tool (in this example it is GS_Example_ADD0), you can see the schema layout as follows. Now we need to adjust the schema locations to the actual file paths.

Open GS_Example_ADD1 and change the import schema locations as follows:
    <xs:import schemaLocation="ean.ucc/common/DocumentCommand.xsd" />
    <xs:import schemaLocation="ean.ucc/common/Message.xsd" />

Open GS_Example_ADD2 and change the import schema locations as follows:
    <xs:import schemaLocation="schemapath/GS_Example_ADD1" namespace="urn:ean.ucc:2" />
    <xs:import schemaLocation="ean.ucc/gdsn/PriceSynchronisationDocument.xsd" namespace="urn:ean.ucc:gdsn:2" />

Open GS_Example_ADD3 and change the import schema location as follows:
    <xs:import schemaLocation="schemapath/GS_Example_ADD2.xsd" />

Create a folder and zip all the above files. We will call this the "Architecture Schema". We have to import it to the ESR once you have imported all the standard schemas which you downloaded in the earlier step.
Note: schemapath is the folder name in which you have zipped the schemas and uploaded to XI. E.g., in the example below, the source field value ean.ucc/gdsn is the schemapath.

This covers the following:
- Use cases/building blocks of GS1 XML messages
- Overview of GS1 XML Business Message Standards (BMS)
- Sample use case
- How to build the GS1 main document schema

In the next blog we are going to see how to use this in the ESR to generate the expected GS1 business document XML.

Thank you for this blog! The usual approach for building the complete document was to add the StandardBusinessDocument and the message type together, which can only be accomplished by editing the xsd. Your approach considerably shortens the time needed to implement such a scenario. Good job! Regards, Mark

Thanks Mark 🙂 Regards., V.Rangarajan

Hello Ranga, first of all thank you for this interesting blog that casts light on the GS1/SAP integration topic, which, as far as I know, is still a murky subject. I had the opportunity to work on this in the past months and I followed an approach very similar to yours (import via xsd editing + proxy to file + java mapping). For these reasons, I'd like to point out a couple of points of attention about what you reported: Regards. Antonio

Hi Antonio, thanks for your comments and inputs. With respect to:

Point 1. GS1 business document type dependency: Yes. It requires a lot of manual edits in the schema to fit the target GS1 requirement, and effort to adjust the ABAP server proxy structure. It ends up with a huge mapping effort.

Point 2. Problems in external definition import: Yes. Simply replacing the ../../ string in the xsd:import statements is not enough. Once we have imported the provided standard schemas, we need to find the issues in the imported external definitions, manually edit, and re-import the broken schema and its dependent schemas. Thanks for your point; I have updated the blog.

Regards., V.Rangarajan
https://blogs.sap.com/2014/06/11/gs1-integration-sap-pi-71-part-i/
CC-MAIN-2017-22
refinedweb
1,290
55.13
Subscribe is Back With A Trillion Messages. The announcement blog post is on the Azure blog, and you can find out more about pricing on the Service Bus pricing page on Azure.com, in the "premium" column. You can create new premium messaging namespaces in the new Azure portal right now and give the public preview a try (click "Create" in the menu bar across the bottom of the portal page). And if you want to learn more about Azure Service Bus Messaging, there's a new 40 minute episode here on Subscribe that goes into more detail on Messaging in general and Premium Messaging in particular.

Is this new feature hidden or disguised in the preview portal like so many other things? I see 'Service bus namespaces' under Browse All but it only offers up a link to the real portal.

Use the link in the write-up above (points to ). We'll take a look at why you can't see it.

When I clicked the above link and signed in I got 'Oops, cannot create service bus' or some similar error, which it won't let me copy and paste so you know exactly what it said. Grrrr.

@MattYetAnotherUserName: I'm very sorry about that. We've identified a portal glitch and are working on it. Try going via New > Hybrid Integration > Service Bus

So it is hidden; not sure what 'Hybrid Integration' means but I would have never dreamed to look there. I see the 'Oops' there as well.

@Mike Schellenberger: Thank you for your patience, Mike. This happens to a few customers and our portal folks are working on figuring it out.

Based on the information it seems as though this is for queues and topics only. Does it also include relays? If so, are the benefits the same? We mainly use relays as our application requires synchronous communication, which the relay facilitates.

@compupc1: Premium Messaging is brokered messaging only, and does not include the Relay. The current version of the Relay would not benefit as much from the resource isolation that we provide here, so the Relay remains available in Standard as you know it.
We're in early planning for a significant upgrade of the Relay, and that new version might indeed have a Premium tier. I can't talk about particular capabilities as of yet.

Privileged to read this informative blog on Azure. Commendable effort put into researching the data. Please enlighten us with regular updates on Azure. Friends, if you're keen to learn more about Azure you can watch this amazing tutorial on the same.
https://channel9.msdn.com/Blogs/Subscribe/Introducing-Azure-Service-Bus-Premium-Messaging
CC-MAIN-2019-26
refinedweb
435
73.07
package nextapp.echo2.webrender;

import java.util.HashMap;
import java.util.Map;

/**
 * A registry of <code>Service</code> objects that may be recalled based
 * on <code>Id</code> values.
 */
public class ServiceRegistry {

    /** Maps service Ids to services */
    private final Map serviceMap = new HashMap();

    /**
     * Creates a new <code>ServiceRegistry</code>.
     */
    public ServiceRegistry() {
        super();
    }

    /**
     * Adds a service to the registry.
     *
     * @param service The service to be added.
     */
    public synchronized void add(Service service) {
        if (serviceMap.containsKey(service.getId()) && serviceMap.get(service.getId()) != service) {
            throw new IllegalArgumentException("Identifier already in use by another service.");
        }
        serviceMap.put(service.getId(), service);
    }

    /**
     * Returns the service with the specified <code>Id</code>.
     *
     * @param id The <code>Id</code> of the service to be retrieved.
     * @return The service which is identified by <code>id</code>.
     */
    public Service get(String id) {
        return (Service) serviceMap.get(id);
    }

    /**
     * Removes a service from the registry.
     *
     * @param service The service to be removed.
     */
    public synchronized void remove(Service service) {
        serviceMap.remove(service.getId());
    }
}
http://kickjava.com/src/nextapp/echo2/webrender/ServiceRegistry.java.htm
CC-MAIN-2017-04
refinedweb
223
61.63
nwrevoke - Revoke a Trustee Right from a directory

nwrevoke [ -h ] [ -S server ] [ -U user name ] [ -P password | -n ] [ -C ] [ -o object name ] [ -t type ] [ -r rights ] file/directory

nwrevoke revokes the trustee rights of the specified bindery object from the directory.

-o object name
The name of the object which should be removed as trustee.

-t object type
The type of the object. Object type must be specified as a decimal value. Common values are 1 for user objects, 2 for group objects and 3 for print queues. Other values are allowed, but are usually used for specialized applications. If object type is not specified, object name is taken as an NDS name.

file/directory
You must specify the file/directory from which to remove the object as trustee. If you specified -S, it must be in fully qualified NetWare notation for the DOS namespace. Otherwise it must be a file or directory mounted on your system using ncpfs.

Example:

nwrevoke -S NWSERVER -o linus -t 1 'src:bsd_src'

With this example, user linus is removed as trustee from the bsd_src directory on the src volume on server NWSERVER.

nwrevoke -o linus -t 1 /home/vana/ncpfs/nwserver/src/bsd_src

With this example, user linus is removed as trustee from the bsd_src directory.

nwrevoke was written by Volker Lendecke with the corresponding NetWare utility in mind. See the Changes file of ncpfs for other contributors.
http://huge-man-linux.net/man8/nwrevoke.html
CC-MAIN-2017-26
refinedweb
222
56.05
22 June 2009 05:12 [Source: ICIS news]

SINGAPORE (ICIS news)--Kuwait Petroleum Corp (KPC) has sold 50,000 tonnes of full range naphtha for first half July lifting, trading sources said on Monday. The cargo was sold to Itochu at a premium of $15-16/tonne (€10.8-11.52/tonne).

KPC had sold a total of 155,000 tonnes in May at premiums ranging between $9-17.50/tonne.

Sources said the large volume of May and June cargoes was due to the delayed start-up of its new aromatics plant at Shuaiba. The Kuwaiti refiner seldom offers supplies on a spot basis as most of its naphtha is sold to term customers.

The new aromatics plant in Shuaiba can produce 395,000 tonnes/year of benzene and 820,000 tonnes/year of paraxylene (PX). Sources said the units would be delayed to the third quarter of the year from the previously targeted June 2009.

($1 = €0.72)

To discuss issues facing the chemical industry go to ICIS connect. For more information
http://www.icis.com/Articles/2009/06/22/9226411/kpc-sells-50000-tonnes-of-naphtha-in-first-half-july.html
CC-MAIN-2015-06
refinedweb
178
68.3
Hi > -----Original Message----- > From: > [mailto:] On Behalf Of Sorin Ristache > Sent: Saturday, 18 February 2006 12:44 AM > To: > Subject: Re: [oXygen-user] Still having problems with XSD catalogs > > > Hello John, > > Looking at the catalog.xml file I see that all the elements are in no > namespace. In an OASIS XML Catalog they must be in the > "urn:oasis:names:tc:entity:xmlns:xml:catalog" namespace. That > means you > have to place all the elements of your catalog in that namespace, for > example by adding the > xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog" attribute to the > root element. Also you have to bind the xsi prefix to the XML Schema > namespace, otherwise it is not a well-formed XML document. > > Please look at the file > [oXygen-install-folder]/frameworks/docbook/dtd/catalog.xml for an > example catalog. > > > Best regards, > Sorin > > <oXygen/> XML Editor, Schema Editor and XSLT Editor/Debugger > > > > George Cristian Bina wrote: > > Hi John, > > > > Oxygen uses system entries to resolve schema locations as > we use the > > parser at SAX level and there is no way to distinguish > between external > > entities and schema locations. There was a discussion about > this on this > > list some time ago if you want to know more details. The > thread starts > > here: > > > > 000631.html > > and continues with > > > > 0633.html > > > > The solution is to add also system entries to your catalog. > > > > Best Regards, > > George > > > --------------------------------------------------------------------- > > George Cristian Bina > > <oXygen/> XML Editor, Schema Editor and XSLT Editor/Debugger > > > > > > > > wrote: > > > >> Hi All, > >> > >> I have been trying to use an XSD catalog file for months > without any > >> luck. > >> As a matter of fact I have *never* been able to get XSD > catalog files > >> working > >> on Oxygen. Currently I am using Oxygen 7.0 on a Windows > 2000 platform. 
> >> > >> The best I can do is the following error: > >> > >> > >> > >> The catalog preferences look like this: > >> > >> > >> > >> The catalog file looks like this: > >> > >> > >> > >> I am using the following catalog.xsd: > >> > >> > >> > >> I don't understand the error. The path to the catalog.xml file is > >> correct > >> and it is readable. > >> Why is this error occurring? I can only find catalog > files for DTDs not > >> XSDs. > >> > >> Thanks. > >> > >> > >> John > _______________________________________________ > oXygen-user mailing list > > >
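Putting Sorin's two requirements together — catalog elements in the OASIS namespace, and system entries so that oXygen can resolve schema locations — a minimal working catalog might look like this (the systemId and uri values below are placeholders, not taken from the thread):

```xml
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- system entries are used to resolve both external entities
       and schema locations, as discussed above -->
  <system systemId="http://example.com/schemas/example.xsd"
          uri="schemas/example.xsd"/>
</catalog>
```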
https://www.oxygenxml.com/pipermail/oxygen-user/2006-February/000751.html
CC-MAIN-2018-34
refinedweb
359
57.16
Boolean

Linnea Palin, Greenhorn, Joined: Oct 02, 2003, Posts: 12, posted Oct 07, 2003 07:32:00

Okay, here's the deal. I searched the FAQ and all other links I found but none contains anything about booleans, which my programming teacher told me to look into a bit.. Would anyone please tell me what a boolean is and what it's used for? Thanks in advance.. /Linnea

Sandeep Achar, Greenhorn, Joined: Sep 26, 2003, Posts: 18, posted Oct 07, 2003 07:52:00

Hi Linnea, boolean is a primitive datatype of Java that can have one of only 2 possible values:
1. true
2. false

Other than these, it is not legal to assign any other value to a boolean variable. The following are examples of some legal assignments:

boolean x = true;
boolean y = false;
y = x; // valid since x is of type boolean.

The following are some examples of illegal assignments:

boolean x = 1; // Cannot assign anything other than "true" or "false"
boolean y = 0; // same reason as above

Hope this suffices. Bye. Regards, Sandeep (SCJP 1.4, SCBCD 1.3)

Herb Schildt, Author, Ranch Hand, Joined: Oct 01, 2003, Posts: 239, posted Oct 07, 2003 07:59:00

The boolean type represents true/false values. Java defines the values true and false using the reserved words true and false. Thus, the value of a variable or expression of type boolean will be one of these two values. The conditional expression in the if statement and the various loops are controlled by boolean expressions. Here is a program that demonstrates some aspects of the boolean type:

// Demonstrate boolean values.
class BoolDemo {
  public static void main(String args[]) {
    boolean b;

    b = false;
    System.out.println("b is " + b);
    b = true;
    System.out.println("b is " + b);

    // a boolean value controls the if statement
    if(b) System.out.println("This is executed.");

    b = false;
    if(b) System.out.println("This is not executed.");

    // outcome of a relational operator is a boolean value
    System.out.println("10 > 9 is " + (10 > 9));
  }
}

The output generated by this program is shown here:

b is false
b is true
This is executed.
10 > 9 is true

First, when a boolean value is output by println(), "true" or "false" is displayed. Second, the value of a boolean variable is sufficient, by itself, to control the if statement; there is no need to write it like this (although it is not wrong to do so):

if(b == true) ...

Third, the outcome of a relational operator, such as >, is a boolean value. This is why the expression 10 > 9 displays the value "true." Further, the extra set of parentheses around 10 > 9 is necessary because the + operator has a higher precedence than the >.

[ October 07, 2003: Message edited by: Herb Schildt ]

For my latest books on Java, including my Java Programming Cookbook, see HerbSchildt.com

Linnea Palin, Greenhorn, posted Oct 07, 2003 12:51:00

This took some reading, but thanks a lot for your answers and I'll test Herb's code and change and try some stuff with it.. Now I think I've got a good picture of what it's about: time for testing, changing, trying, testing, changing, trying, and it all over again. Thanks again.

Linnea Palin, Greenhorn, posted Oct 07, 2003 13:17:00

I've posted a thread that bears the topic "Help - boolean" as well. Please read it through if you haven't. I'm doing an assignment for my programming lessons, so therefore, please do not give me the answers, but give me help to get there. Fair enough? Okay, this is how far I've come after playing around with code..
public class BooleanDemo {
  public static void main (String args[]) {
    int i;
    int x;
    int a;
    boolean b;
    b = true;
    a = i%x;
    for (i=100; i<201; i++)
      for (x=2; x<i; x++)
        if (b = a>0)
          System.out.print("");
        else
          System.out.println(+i + (i%x));
  }
}

The thing is, the errors I get are these:

BooleanDemo.java [11:1] variable i might not have been initialized
a = i%x;
^
BooleanDemo.java [11:1] variable x might not have been initialized
a = i%x;
^
2 errors
Errors compiling main.

Why won't it work? What's the thing I missed about booleans? Or is there something wrong with the ints I've made? Thank you in advance..
Linnea Palin, Greenhorn, posted Oct 07, 2003 13:22:00

Another test I made:

public class BooleanDemo {
  public static void main (String args[]) {
    int i;
    int x;
    boolean b;
    b = true;
    int a == (i%x);
    for (i=100; i<201; i++)
      for (x=2; x<i; x++)
        if (b = a>0)
          System.out.print("");
        else
          System.out.println(+i + (i%x));
  }
}

gives only one error, on the line int a == (i%x); If you respond, please make sure I understand what version of code you meant.

Jeff Bosch, Ranch Hand, Joined: Jul 30, 2003, Posts: 805, posted Oct 07, 2003 17:02:00

Hi, Linnea - I'm addressing your last message, which appears incomplete. So, if I misunderstand your question, please elaborate.

int a == (i%x);

In this line, you're trying to use a comparison operator to assign a value. You can fix that by assigning the result to a boolean:

boolean b = (a == i%x);

Now you can run tests against the variable b. Or, you can embed the test into a conditional statement:

if ( a == (i%x) ) {
  // do stuff
}

If you're trying to assign the value to the int a, then:

int a = i % x;

Note the single equal sign for assignment. Hope that helps!

[ October 07, 2003: Message edited by: Jeff Bosch ]

Give a man a fish, he'll eat for one day. Teach a man to fish, he'll drink all your beer. Cheers, Jeff (SCJP 1.4, SCJD in progress, if you can call that progress...)
hope this helps, or doesn't confuse you too much. i didn't want to just give you the answer. it's the former teacher in me... There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Herb Schildt Author Ranch Hand Joined: Oct 01, 2003 Posts: 239 5 posted Oct 08, 2003 08:02:00 0 Linnea: This line boolean b == (a<1); is incorrect. This is the same problem that you had before, which was explained by Jeff. When you declare a variable, you can give it an initial value, but to do this, you must use a single equals, like this: boolean b = (a<1); Furthermore, both of the variables a and x are used before they are assigned values. This is an error in Java. Also, this if statement if (b = a>0) System.out.print(""); is technically correct, but I doubt that it does what you intend. To control the if, there is no need for b. The outcome of a>0 produces a boolean result, which is sufficient in itself. For example, if (a>0) System.out.print(""); is better. What your original code did is compare a>0 and then assign this boolean result to b, with the overall value of the expression being the result of a>0. In my original reply to your question, I was not intending to suggest that you need to use a boolean variable, per se, to control an if statement. Only that all conditional statements in Java are controlled by boolean expressions. In Java, the result of any relational or logical operation, such as a < 10, produces a boolean result. Linnea: I think that you need to get a good introductory book on Java and read through the first few chapters. Doing so will answer a lot of your questions. 
Herb Schildt Author Ranch Hand Joined: Oct 01, 2003 Posts: 239 5 posted Oct 08, 2003 10:12:00 0 Linnea: After thinking about your posts, it occurred to me that your original question may have been referring more generally to the concept of boolean expressions, such those used to control an if statement, rather than to the boolean data type, specifically. Along these lines, here's some basic information on the if statement and the expression that controls it. Perhaps it will clear up some of the trouble that you have been having. The simplest form of the if is shown here: if(condition) statement; Here, condition must be an expression that produces a boolean (i.e., true/false) result. If the condition is true, then the statement is executed. If the. Of course, you can add an else clause to the if , as you have done in your code, to provide an alternative. You can also make the target of an if or else be a block of statments rather than a single statement. Java defines a full complement of relational operators that can be used in a conditional expression. They are shown here. < Less than <= Less than or equal > Greater than >= Greater than or equal == Equal to != Not equal Notice that the test for equality is the double equal sign. You can also use the logical operators: & AND | OR ! NOT ^ XOR && Short-circuit AND || Short-Circuit OR Stan James (instanceof Sidekick) Ranch Hand Joined: Jan 29, 2003 Posts: 8791 posted Oct 08, 2003 10:48:00 0 Glad to see those operators in the post above. If your instructor suggested a general look at booleans, you should be aware that there is a whole algebra for booleans. Frinstance, DeMorgan's transform: the opposite of (a and b) is (not a or not b) These can be hard to follow the first time through. If I say "To have a parade you must have a clear day (A) and temperature above 50 degrees (B)" then what would cancel a parade? A cloudy day (not A) or a cold day (not B). 
The short way to write it is !(A & B) = (!A | !B) If that kind of thing grabs your imagination, I'm sure that gang here will chime in with much more than I can remember right off. This is handy stuff to understand just writing and reading code, but down at the chip level it's all you can see for miles and miles. A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi Phil Chuang Ranch Hand Joined: Feb 15, 2003 Posts: 251 posted Oct 08, 2003 11:39:00 0 alrighty! Bring on the boolean algebra and karnaugh maps!! I agree. Here's the link: subject: Boolean Similar Threads EL Logical Operator type conversion Boolean type from 0 length string Que from KB book, chap 4 -> test Q 6 Clarification on : Overloading vs polymorphism Confuse BufferedReader type with String All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
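Stan's parade rule can be checked exhaustively in a few lines of Java (the class and variable names below are invented for the demonstration):

```java
public class DeMorganDemo {
    // true when De Morgan's transform !(A & B) == (!A | !B) holds
    static boolean holds(boolean a, boolean b) {
        return (!(a & b)) == (!a | !b);
    }

    public static void main(String[] args) {
        boolean[] values = {false, true};
        for (boolean clearDay : values) {        // A: clear day
            for (boolean warmEnough : values) {  // B: above 50 degrees
                System.out.println(clearDay + " " + warmEnough
                        + " -> " + holds(clearDay, warmEnough));
            }
        }
    }
}
```

All four combinations print true, which is exactly De Morgan's claim.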
http://www.coderanch.com/t/394512/java/java/Boolean
CC-MAIN-2015-32
refinedweb
2,212
71.44
Subject: Re: [boost] [graph] ranges used by boost.graph not compatible with c++11 range-based-for
From: Jeremiah Willcock (jewillco_at_[hidden])
Date: 2013-10-03 11:26:57

On Thu, 3 Oct 2013, Nathan Ridge wrote:

>>> Sadly, I don't think there's a way of turning an std::pair of iterators into
>>> a range-based-for-compliant range (see [1]).
>>
>> Correct me if I'm missing something, but when performing ADL the
>> namespaces of the template parameters
>> are in the set of associated namespaces. So, one could simply overload
>> begin/end in the namespace where
>> the iterator types are defined ...
>
> You're right, a per-namespace solution is possible. I don't think it can
> be done in general, though (i.e. for all std::pairs of iterators in one stroke).
>
> I'm not familiar with BGL - are the iterator types in question defined
> in a namespace under the library's control, or could they be user-provided
> types (in which case the user would have to put the begin()/end()
> in their own namespace)?

For BGL-provided graph types, they are usually in a namespace under the library's control (I believe some are just std::vector<...>::iterator and similar, though). For user-defined graph types, they can be from anywhere, and requiring begin and end would break backward compatibility.

-- Jeremiah Willcock

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2013/10/206789.php
CC-MAIN-2020-45
refinedweb
248
57.16
Issues

Running package through gui shows gui error

Here's my setup.py:

from setuptools import setup

APP = ['Valium.py']
DATA_FILES = ['check.svg']
OPTIONS = {
    'argv_emulation': False,  # some people say this helps running pkg in gui
}

setup(
    app=APP,
    data_files=DATA_FILES,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
)

I'm building with:

rm -rf build/ dist/ && python setup.py py2app --use-pythonpath --no-strip

When I run ./dist/Valium.app/Contents/MacOS/Valium it works fine. But when I double click on Valium.app, I get a gui msg box that says "valium error". Screenshots at and.

I'm using: OSX 10.8.2. I've not activated a virtual env so I'm using the system python (2.7.2), although py2app is installed in a virtual env.
* Details of the virtual env are listed in

How do I track this down? Thanks so much! Dan

The build likely won't work from the command-line as well when you remove "--use-pythonpath". When that option is set the application honors the PYTHONPATH variable; that variable is set in your build shell but won't be set when you double-click the app (because Finder won't read the shell startup files).

For some reason py2app doesn't copy some files that you need in the app bundle, and that's likely an old bug that you run into because you end up using the py2app version that Apple ships. The easiest workaround is likely to either install a newer py2app version in the system install of python, or install all software you need in the VE (where you have a newer version of py2app). I've fixed a number of bugs related to not copying resources/modules since the version that Apple uses was released (which was several months before 10.8.0 was released).

Thanks for the reply! Here's the latest: I activated the virtualenv so it should be using py2app 0.7.3.
If I build with OR without --use-pythonpath:

rm -rf build/ dist/; python setup.py py2app --use-pythonpath --no-strip

or

rm -rf build/ dist/; python setup.py py2app --no-strip

and run it from the command line or by double clicking, I get a little gui error msg box that says

I don't know why I was getting that error when importing Foundation last time and not now. I haven't installed anything new in the VE or recreated it. Does this new error message make sense to you? Thanks for all your help!

I may have figured out why I wasn't seeing the import error: I think I was seeing it when running the program, but when I was just testing all the variations of building the app, I was just building, and not running each. Anyway, where I'm at now is that the VE looks like

At least for now, I've installed pyqt using brew so it's not in the VE. I added it to my PYTHONPATH so I can access it. I'm able to run the app normally, with python valium.py.

When I build with --use-pythonpath and NO --alias:

rm -rf build/ dist/; python setup.py py2app --use-pythonpath --no-strip

and run it from the command line, I get the gui error about the python runtime not being found.

With --use-pythonpath and --alias, it works.
With --alias and NO --use-pythonpath, I get a PyQt4 import error.
With NO --alias and NO --use-pythonpath, I get the "no runtime located" gui error.

I tried another approach. I symlinked pyqt4 and sip into my virtualenv:

I cleared PYTHONPATH: export PYTHONPATH=''

An alias build works. A normal build fails with "Python runtime not could be located. You may need to install a framework build of Python, or edit the PyRuntimeLocations array in this application's Info.plist file."

What is the output of the following command:

Py2app needs a version of python that is dynamically linked to the interpreter (either libpython.dylib or Python.framework).
This is because py2app copies a small C executable into your app bundle that performs some initialization before starting the interpreter, and the code in that C executable uses the python shared library.

Interesting. Summary: I got it to work by switching to the python installed by brew. I played around with the apple/system python and couldn't get it to work. Then I: removed virtualenv and virtualenvwrapper which I had installed system wide, set up my /etc/paths to put brew stuff (/usr/local/...) first, brew installed python, and re-installed all my python stuff with a new VE that allows access to system packages. Then I was able to do a normal build:

rm build/ dist/; python setup.py py2app

and it works from the console and by double clicking in the gui. I've spent sooo many hours trying to get this working. It would be super helpful if py2app could detect if the python or environment is bad and give a clear error. I'll leave this open in case you have any questions, but feel free to close it. Thanks so much!!

I try to enhance py2app whenever I get reports like this (either by adding code to support a new build/install configuration, or by adding an explicit warning). I don't fully understand yet why py2app didn't work for you. Part of the problem is that you ran with the (old, and fairly buggy) system install of py2app and there is little I can do about that because Guido closely guards the keys to his time machine :-). You also ran into problems with a newer release though, and I'll have to reproduce your environment to fully investigate that problem.
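Building on the maintainer's explanation above, one quick sanity check is whether the interpreter you are building with is a framework or shared-library build at all; the snippet below is an illustrative heuristic, not py2app's own detection code:

```python
import sysconfig

def is_shared_or_framework_build():
    """Return True if this Python links its interpreter dynamically.

    py2app's bundled C launcher loads libpython.dylib or Python.framework
    at startup, so a statically linked interpreter produces the
    "runtime could not be located" dialog described in this thread.
    """
    framework = sysconfig.get_config_var("PYTHONFRAMEWORK")  # e.g. "Python" on macOS framework builds
    shared = sysconfig.get_config_var("Py_ENABLE_SHARED")    # 1 when libpython is a shared library
    return bool(framework) or shared == 1

print(is_shared_or_framework_build())
```

A `False` here would be consistent with the Info.plist error dialog quoted above, regardless of which py2app version is installed.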
https://bitbucket.org/ronaldoussoren/py2app/issues/103/running-package-through-gui-shows-gui
CC-MAIN-2015-40
refinedweb
958
72.16
Created on 2006-11-25 16:27 by wmula, last changed 2010-08-25 14:25 by BreamoreBoy.

Tkinter: canvas itemconfigure bug

Consider the following code:

--- tkbug.py ---
from Tkinter import *

root = Tk()
canvas = Canvas(root)

text = "sample text with spaces"
id = canvas.create_text(0, 0, text=text)
text2 = canvas.itemconfigure(id)['text'][-1]

print text
print text2
--- eof ---

This toy prints:

sample text with spaces
('sample', 'text', 'with', 'spaces')

The returned value is not a string -- Tk returns the same string as passed when creating the item, but Tkinter splits it. To fix this problem, the internal method '_configure' has to be changed a bit:

*** Tkinter.py.old	2006-11-20 16:48:27.000000000 +0100
--- Tkinter.py	2006-11-20 17:00:13.000000000 +0100
***************
*** 1122,1129 ****
      cnf = _cnfmerge(cnf)
      if cnf is None:
          cnf = {}
!     for x in self.tk.split(
              self.tk.call(_flatten((self._w, cmd)))):
          cnf[x[0][1:]] = (x[0][1:],) + x[1:]
      return cnf
      if type(cnf) is StringType:
--- 1122,1134 ----
      cnf = _cnfmerge(cnf)
      if cnf is None:
          cnf = {}
!     for x in self.tk.splitlist(
              self.tk.call(_flatten((self._w, cmd)))):
+         if type(x) is StringType:
+             if x.startswith('-text '):
+                 x = self.tk.splitlist(x)
+             else:
+                 x = self.tk.split(x)
          cnf[x[0][1:]] = (x[0][1:],) + x[1:]
      return cnf
      if type(cnf) is StringType:

Maybe a better/faster way is to provide a Canvas method that returns the 'text' property for text items:

def get_text(self, text_id):
    try:
        r = self.tk.call(self._w, 'itemconfigure', text_id, '-text')
        return self.tk.splitlist(r)[-1]
    except TclError:
        return ''

There is a simple workaround: use itemcget.

The error applies to other options as well: dash, activedash, disableddash, tags, arrowshape, font. These options also may contain a space in their value. I collected this information from 'man n Canvas' from Tk 8.4.6. I hope I didn't miss any.

BTW the itemconfigure document string is broken.

Greetings, Matthias Kievernagel

I no longer know what I meant with "document string is broken".
It is not very clear, but not broken. It does not specify clearly the return value in the case of 'without arguments'. The problem is still present in trunk.

The problem, as far as I understand it: some return values from tcl are strings that may contain spaces (or are even certain to contain spaces). They are nonetheless translated to a Python tuple by Tcl_SplitList in _tkinter.c. There is no difference in syntax between tcl lists and tcl strings (with spaces). You have to have contextual knowledge to do the right thing. This knowledge cannot be attached to itemconfigure alone (for reconstitution of strings) because then information about whitespace is already lost. For return values known to be strings, _tkinter/TkApp_SplitList must not be used. So I think it's a more general bug related to tcl string return values. Other Tkinter bugs may have the same explanation. Don't know how to resolve this without a lot of work. Matthias.

The problem is actually on the Tkinter side, not really tcl/tk's fault here. Tkinter should be formatting that text option as "{text here}" when the value contains one or more spaces (it is actually fine to use this tcl formatting when there are no spaces either). To try this yourself, just change text to:

text = "{sample text with spaces}"

I can't look at the Tkinter source right now to propose a correct solution, but will do later (today hopefully).
Changing Misc._configure is too risky given there are no tests for Tkinter (and I find it weird sometimes, someone will still have to explain me why Tkinter plays with cnf and kw all the time), the other option involves leaving these special needings in Text and is something I dislike because other widgets could use these new things that would be added. Yet another option would be to start writing unit tests for Tkinter and much of these bugs would end up being caught and hopefully fixed properly.
http://bugs.python.org/issue1602742
If you are on an archlinux based system, use pacman -S badtouch. If you are on Mac OSX, use brew install badtouch. To build from source, make sure you have rust and libssl-dev installed and run cargo install. Verify your setup is complete with badtouch --help.

sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get install build-essential libssl-dev pkg-config
curl -sf -L | sh
source $HOME/.cargo/env
cd /path/to/badtouch
cargo install

A simple script could look like this:

descr = "example.com"

function verify(user, password)
    session = http_mksession()

    -- get csrf token
    req = http_request(session, 'GET', '', {})
    resp = http_send(req)
    if last_err() then return end

    -- parse token from html
    html = resp['text']
    csrf = html_select(html, 'input[name="csrf"]')
    token = csrf["attrs"]["value"]

    -- send login
    req = http_request(session, 'POST', '', {
        form={
            user=user,
            password=password,
            csrf=token
        }
    })
    resp = http_send(req)
    if last_err() then return end

    -- search response for successful login
    html = resp['text']
    return html:find('Login successful') ~= nil
end

Please see the reference and examples for all available functions. Keep in mind that you can use print(x) and badtouch oneshot to debug your script.

Decode a base64 string.
base64_decode("ww==")

Encode a binary array with base64.
base64_encode("\x00\xff")

Clear all recorded errors to prevent a requeue.
if last_err() then
    clear_err()
    return false
else
    return true
end

Execute an external program. Returns the exit code.
execve("myprog", {"arg1", "arg2", "--arg", "3"})

Hex encode a list of bytes.
hex("\x6F\x68\x61\x69\x0A\x00")

Calculate an hmac with md5. Returns a binary array.
hmac_md5("secret", "my authenticated message")

Calculate an hmac with sha1. Returns a binary array.
hmac_sha1("secret", "my authenticated message")

Calculate an hmac with sha2_256. Returns a binary array.
hmac_sha2_256("secret", "my authenticated message")

Calculate an hmac with sha2_512. Returns a binary array.
hmac_sha2_512("secret", "my authenticated message")

Calculate an hmac with sha3_256. Returns a binary array.
hmac_sha3_256("secret", "my authenticated message")

Calculate an hmac with sha3_512. Returns a binary array.
hmac_sha3_512("secret", "my authenticated message")

Parses an html document and returns the first element that matches the css selector. The return value is a table with text being the inner text and attrs being a table of the element's attributes.
csrf = html_select(html, 'input[name="csrf"]')
token = csrf["attrs"]["value"]

Same as html_select but returns all matches instead of the first one.
html_select_list(html, 'input[name="csrf"]')

Sends a GET request with basic auth. Returns true if no WWW-Authenticate header is set and the status code is not 401.
http_basic_auth("", user, password)

Create a session object. This is similar to requests.Session in python-requests and keeps track of cookies.
session = http_mksession()

Prepares an http request. The first argument is the session reference, and cookies from that session are copied into the request. After the request has been sent, the cookies from the response are copied back into the session. The next arguments are the method, the url and additional options. Please note that you still need to specify an empty table {} even if no options are set. The following options are available:

query - a map of query parameters that should be set on the url
headers - a map of headers that should be set
basic_auth - configure the basic auth header with {"user", "password"}
user_agent - overwrite the default user agent with a string
json - the request body that should be json encoded
form - the request body that should be form encoded
body - the raw request body as string

req = http_request(session, 'POST', '', {
    json={
        user=user,
        password=password,
    }
})
resp = http_send(req)
if last_err() then return end
if resp["status"] ~= 200 then
    return "invalid status code"
end

Send the request that has been built with http_request.
Returns a table with the following keys:

status - the http status code
headers - a table of headers
text - the response body as string

req = http_request(session, 'POST', '', {
    json={
        user=user,
        password=password,
    }
})
resp = http_send(req)
if last_err() then return end
if resp["status"] ~= 200 then
    return "invalid status code"
end

Decode a lua value from a json string.
json_decode("{\"data\":{\"password\":\"fizz\",\"user\":\"bar\"},\"list\":[1,3,3,7]}")

Encode a lua value to a json string. Note that empty tables are encoded to an empty object {} instead of an empty list [].
x = json_encode({
    hello="world",
    almost_one=0.9999,
    list={1,3,3,7},
    data={
        user=user,
        password=password,
        empty=nil
    }
})

Returns nil if no error has been recorded, returns a string otherwise.
if last_err() then return end

Connect to an ldap server and try to authenticate with the given user.
ldap_bind("ldaps://ldap.example.com/", "cn=\"" .. ldap_escape(user) .. "\",ou=users,dc=example,dc=com", password)

Escape an attribute value in a relative distinguished name.
ldap_escape(user)

Connect to an ldap server, log into a search user, search for the target user and then try to authenticate with the first DN that was returned by the search.
ldap_search_bind("ldaps://ldap.example.com/",
    -- the user we use to find the correct DN
    "cn=search_user,ou=users,dc=example,dc=com", "searchpw",
    -- base DN we search in
    "dc=example,dc=com",
    -- the user we test
    user, password)

Hash a byte array with md5 and return the results as bytes.
hex(md5("\x00\xff"))

Connect to a mysql database and try to authenticate with the provided credentials. Returns a mysql connection on success.
sock = mysql_connect("127.0.0.1", 3306, user, password)

Run a query on a mysql connection. The 3rd parameter is for prepared statements.
rows = mysql_query(sock, 'SELECT VERSION(), :foo as foo', {
    foo='magic'
})

Prints the value of a variable. Please note that this bypasses the regular writer and may interfere with the progress bar.
Only use this for debugging. print({ data={ user=user, password=password } }) Returns a random u32 with a minimum and maximum constraint. The return value can be greater or equal to the minimum boundary, and always lower than the maximum boundary. This function has not been reviewed for cryptographic security. rand(0, 256) Generate the specified number of random bytes. randombytes(16) Hash a byte array with sha1 and return the results as bytes. hex(sha1("\x00\xff")) Hash a byte array with sha2_256 and return the results as bytes. hex(sha2_256("\x00\xff")) Hash a byte array with sha2_512 and return the results as bytes. hex(sha2_512("\x00\xff")) Hash a byte array with sha3_256 and return the results as bytes. hex(sha3_256("\x00\xff")) Hash a byte array with sha3_512 and return the results as bytes. hex(sha3_512("\x00\xff")) Pauses the thread for the specified number of seconds. This is mostly used to debug concurrency. sleep(3) Create a tcp connection. sock = sock_connect("127.0.0.1", 1337) Send data to the socket. sock_send(sock, "hello world") Receive up to 4096 bytes from the socket. x = sock_recv(sock) Send a string to the socket. A newline is automatically appended to the string. sock_sendline(sock, line) Receive a line from the socket. The line includes the newline. x = sock_recvline(sock) Receive all data from the socket until EOF. x = sock_recvall(sock) Receive lines from the server until a line contains the needle, then return this line. x = sock_recvline_contains(sock, needle) Receive lines from the server until a line matches the regex, then return this line. x = sock_recvline_regex(sock, "^250 ") Receive exactly n bytes from the socket. x = sock_recvn(sock, 4) Receive until the needle is found, then return all data including the needle. x = sock_recvuntil(sock, needle) Receive until the needle is found, then write data to the socket. sock_sendafter(sock, needle, data) Overwrite the default \n newline. 
sock_newline(sock, "\r\n") You can place a config file at ~/.config/badtouch.toml to set some defaults. [runtime] user_agent = "w3m/0.5.3+git20180125" [runtime] # requires CAP_SYS_RESOURCE # sudo setcap 'CAP_SYS_RESOURCE=+ep' /usr/bin/badtouch rlimit_nofile = 64000 The badtouch runtime is still very bare bones, so you might have to shell out to your regular python script occasionally. Your wrapper may look like this: descr = "example.com" function verify(user, password) ret = execve("./docs/test.py", {user, password}) if last_err() then return end if ret == 2 then return "script signaled an exception" end return ret == 0 end Your python script may look like this: import sys try: if sys.argv[1] == "foo" and sys.argv[2] == "bar": # correct credentials sys.exit(0) else: # incorrect credentials sys.exit(1) except: # signal an exception # this requeues the attempt instead of discarding it sys.exit(2) GPLv3+
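The encoding and hashing helpers above map directly onto Python's standard library, which is handy for cross-checking a badtouch script's crypto logic outside the tool. The sketch below is mine (the Python names are of course `base64`/`hmac`/`hashlib`, not the Lua names from the reference):

```python
import base64
import hashlib
import hmac

# base64_decode("ww==") decodes to the single byte 0xC3
assert base64.b64decode("ww==") == b"\xc3"

# hex("\x6F\x68\x61\x69\x0A\x00") -> lowercase hex string
assert bytes([0x6F, 0x68, 0x61, 0x69, 0x0A, 0x00]).hex() == "6f6861690a00"

# hmac_sha2_256("secret", "my authenticated message"), rendered as hex:
digest = hmac.new(b"secret", b"my authenticated message", hashlib.sha256)
assert len(digest.hexdigest()) == 64  # sha256 -> 32 bytes -> 64 hex chars
```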
https://awesomeopensource.com/project/kpcyrd/badtouch
AND EVERYONE ELSE :D

UPD: The problem is fixed now.

I'm trying to register for the upcoming Codeforces Round #602 (Div. 2, based on Technocup 2020 Elimination Round 3) but I can't for some reason. When I click the Register button, nothing happens. Is it a bug or something? What should I do?

I have a question. I have enabled "are test points allowed?" from the general description. Then I got a new column to specify test points individually. Please look at the picture below: And the picture of Groups points policy: Now, what do I do? I have 20 test cases for group 1, 30 for group 2 and 50 for group 3. I want to give partial points to each of the individual groups: 20 points to group 1, 30 to group 2 and 50 to group 3. But how do I do this? I can't find any option for it. I set 1 point for each individual test case. Will they be added up? I mean, I have 20 cases for group 1 and each of them is worth 1 point. Will they add up and make 20 points for that particular group?

Edit: This is my Checker code: Checker

Today you have to solve these problems to develop DP skills: Different types of Dynamic programming problems in one blog. Thank you so much.

Recently I tried to solve Loj-1339 and found some ideas. The main problem is :). It can be done with Mo's algorithm. If I were asked to find the number of distinct integers from i to j, I could easily find the answer using Mo's, but problems occur when max-min is wanted. I'm not able to keep track of the maximum. When I add values I can do this, but while removing, I can't find a way to track the maximum. One of my friends told me to make a sqrt decomposition of the array. After adding/removing, somehow we have to maintain the aux[] array, and for queries, suppose we have sqrt(n) blocks; then from i to j, if any block is fully inside i and j we just take the max from that block id, and if it isn't, we just loop through the array.
But I am not sure about this idea, and I can't implement it properly. Can you please help me with this problem? Thanks in advance.

Hello. I think the codeforces contribution calculation system is very complicated, and I don't know how it is calculated. Can anyone tell me? Is there a formula or something, just like the rating change system? Sometimes I see that some of my comments have +45 contribution, but my contribution increases only 4-5, and sometimes +3/+4 changes the main contribution more than that. Suppose I get a downvote (say x) on a comment; my main contribution then changes seemingly at random. I couldn't find any kind of pattern. What's your opinion about it? I wonder how it is calculated. If you know anything about it, please inform me, I'll be grateful to you. Thanks in advance.

Recently I had to solve a problem like this: given an array, update it n times in the range [Li..Ri] and then output the array. I did the updates using a segment tree and recovered the array by querying the indexes one by one. It took O(n log n). Can I do it in O(n)? And additionally, can I find the value at any index in O(1)? Thanks in advance.

Recently, I learned Bitmask DP and used only a variable to see if some block was visited or not. But now I want to do it using std::bitset. Will it be more or less efficient than the first approach, and either way, why? I'm just confused. I think bitset should be fine, and I want to use it because it is easy to use. What's your opinion? Thanks in advance.

Hello, can someone set IOI style problems using codeforces polygon? That is, can we make a contest using IOI style problems, where we will be able to use subtasks and other features?

Hello, I have been trying a lot to solve Trail Maintenance (LOJ), but I am getting RTE every time. I have tried many different ways, but I still get RTE and don't know why. Can anyone help me debug it, please?
Thanks in advance

/**
 * First of all, I will be taking input until the whole graph is connected;
 * until then, for every query I have to output -1.
 * After the whole graph is connected, I perform the first MST and output it.
 * Then for each remaining query, the newly added edge makes a cycle, and we
 * remove the largest unnecessary edge from the graph, then output the answer. :D
 **/
#include <bits/stdc++.h>
using namespace std;

const int N = 503;

struct pii {
    int a;
    int b;
    int c;
    pii() { a = 0, b = 0, c = 0; }
    pii(int m, int n, int o) { a = m; b = n; c = o; }
};
bool operator<(pii a, pii b) { return a.c < b.c; }

int n, pos, size, ans, parent[N], q, u, v, w;
vector<pii> mst;
pii ara[N + 12];

void makeset() { for (int i = 0; i < N; i++) parent[i] = i; }
int find(int n) { return n == parent[n] ? n : parent[n] = find(parent[n]); }
void Union(int a, int b) { parent[find(a)] = find(b); }

int first_mst()
{
    sort(mst.begin(), mst.end());
    makeset();
    size = mst.size();
    int sum = 0;
    for (int i = 0; i < size; i++) {
        if (find(mst[i].a) != find(mst[i].b)) {
            Union(mst[i].a, mst[i].b);
            ara[pos++] = pii(mst[i].a, mst[i].b, mst[i].c);
            sum += mst[i].c;
        }
    }
    return sum;
}

void mst2()
{
    size = pos;
    sort(ara, ara + size);
    makeset();
    int indx = -1;
    int sum = 0;
    for (int i = 0; i < size; i++) {
        if (find(ara[i].a) != find(ara[i].b)) {
            Union(ara[i].a, ara[i].b);
            sum += ara[i].c;
        } else {
            indx = i;
        }
    }
    if (indx == pos - 1) {
        pos--;
    } else if (indx != -1) {
        pii mm = ara[pos - 1];
        ara[indx] = mm;
        pos--;
    }
    printf("%d\n", sum);
}

int main()
{
    //freopen("in.txt","r",stdin);
    int t, caseno = 0;
    scanf("%d", &t);
    while (t--) {
        mst.clear();
        pos = 0;
        makeset();
        scanf("%d%d", &n, &q);
        printf("Case %d:\n", ++caseno);
        int k = n;
        while (q--) {
            scanf("%d%d%d", &u, &v, &w);
            mst.push_back(pii(u, v, w));
            if (find(u) != find(v)) {
                k--;
                Union(u, v);
            }
            if (k == 1) break;
            printf("-1\n");
        }
        int ans = first_mst();
        printf("%d\n", ans);
        while (q--) {
            scanf("%d%d%d", &u, &v, &w);
            ara[pos++] = pii(u, v, w);
            mst2();
        }
    }
    return 0;
}

Hello,
I have tried a lot to solve MKTHNUM-spoj. To solve it, I slightly changed the problem statement; it goes like this: you are asked to find a number such that there are k-1 numbers less than it in the range [l...r]. Then I built a merge sort tree, sorted a copy of the initial array, and binary searched over it. My time complexity is O((log2(n))^2) per query. I am getting Runtime Error on case 11 (I think), but couldn't find the bug :'(

Updt: Now I am getting wrong answer; the first 14 test cases ran smoothly.

Here goes my code:

#include <bits/stdc++.h>
#define all(v) v.begin(), v.end()
using namespace std;

const int N = 100099;
vector<int> tree[N * 3];
int ara[N + 12];

void build(int at, int l, int r)
{
    if (l == r) {
        tree[at].push_back(ara[l]);
        return;
    }
    int mid = (l + r) / 2;
    build(at * 2, l, mid);
    build(at * 2 + 1, mid + 1, r);
    merge(all(tree[at * 2]), all(tree[at * 2 + 1]), back_inserter(tree[at]));
}

int query(int at, int L, int R, int l, int r, int indx)
{
    if (l > R or r < L) return 0;
    if (L >= l and r >= R) {
        int pp = upper_bound(all(tree[at]), ara[indx]) - tree[at].begin();
        return pp;
    }
    int mid = (L + R) / 2;
    return query(at * 2, L, mid, l, r, indx) + query(at * 2 + 1, mid + 1, R, l, r, indx);
}

int main()
{
    int n, q, l, r, k;
    scanf("%d%d", &n, &q);
    for (int i = 1; i <= n; i++) {
        scanf("%d", &ara[i]);
    }
    build(1, 1, n);
    sort(ara + 1, ara + 1 + n);
    while (q--) {
        scanf("%d%d%d", &l, &r, &k);
        int high = n, low = 1, mid, ans = -1;
        int cnt = 0;
        while (low <= high) {
            mid = (low + high) / 2;
            int pp = query(1, 1, n, l, r, mid);
            if (k <= pp) {
                ans = mid;
                high = mid - 1;
            } else low = mid + 1;
        }
        printf("%d\n", ans);
    }
    return 0;
}

Now I want to solve some problems and learn advanced algos related to it. Please help me by giving me more sources.
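For stress-testing a solution like the one above on small inputs, a brute-force reference is useful: the answer to a query (l, r, k) is just the k-th smallest element of the subarray. A minimal Python check (my own helper, not part of the post):

```python
def kth_in_range(arr, l, r, k):
    # 1-based inclusive range [l, r]; returns the k-th smallest (k >= 1).
    # O(n log n) per query -- only intended as a reference for stress tests.
    return sorted(arr[l - 1:r])[k - 1]

# In [3, 1, 4] (positions 1..3), the 2nd smallest is 3:
assert kth_in_range([3, 1, 4, 1, 5], 1, 3, 2) == 3
```

Generating random small arrays and comparing this against the tree-based answer is usually the fastest way to find the kind of off-by-one bug described in the post.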
Hello codeforces, I tried to solve Light OJ 1110 — An Easy LCS but I'm getting TLE. Here is my code; please help me reduce my runtime :) Thanks in advance :)

I saw someone use a function instead of cin/scanf to input a number, like this:

int n;
n = in();

where the in() function is given below:

template <typename T>
T in() {
    char ch;
    T n = 0;
    bool ng = false;
    while (1) {
        ch = getchar();
        if (ch == '-') { ng = true; ch = getchar(); break; }
        if (ch >= '0' && ch <= '9') break;
    }
    while (1) {
        n = n * 10 + (ch - '0');
        ch = getchar();
        if (ch < '0' || ch > '9') break;
    }
    return (ng ? -n : n);
}

I don't know which is faster, and I also don't know how scanf or cin work. Can anyone please tell me?

Hello codeforces, hello everybody. As you all know, ACM ICPC is about to start, and many of the participants are also CF users, so let's play a game: according to codeforces ratings, can you tell me which team is first? All you have to do is calculate their average rating: add the three members' ratings and divide by 3, because there are 3 members in a team. Then find the maximum average rating over all the teams, and comment the name of their country and team/university. And if you get bored, you don't have to do it all; just suggest the average rating of a team in the comment box and I will fix it. The winner will be announced soon, I hope :D :D Happy coding everybody :D :D Good luck!

Hello codeforces, I have recently started learning BFS/DFS, and I tried solving the following problem: Uva 10653. Here is my code; please, someone, help me debug my problem.

Hello codeforces, I am a beginner, and I started learning Dynamic Programming a few days ago. I haven't learned any specific algorithm yet, but I want to practice some DP problems. Can anyone suggest some easy DP problems, please?

Hello everybody, I have tried a lot to solve the Light OJ 1087 — Diablo problem.
problem link: My sample output doesn't match. I have tried a lot; I am a noob, please help me debug my code. Why does this happen? my code: Please help me. Downvote? It's okay, but please help me.

Hello, I can't figure out why the judge output is so strange! problem link: my code: judge status: In my compiler the answer is okay, so where is the problem? I have also tested it in different online compilers.

I have tried a lot to solve the "milking cow" problem from USACO, but I get WA on test 7. I don't know what the problem is; I have tried multiple approaches, but it is still wrong on test 7. Can you please help me find my bug? my code: problem source: Please, please help me, guys; I have tried my best. Just one question: am I committing any kind of segmentation violation, such as wrong array indexing or accessing memory out of bounds? memory limit: 256 mb. code:

Hello everybody, please help me. I was solving problems on Light OJ ("Curious Robin Hood") and getting wrong answer. problem link: code link: I am a beginner and I recently learned a bit about segment trees. I used a segment tree in this solution, but I can't find my mistake. Can anyone please help me? Everything seems okay, so where is the problem? Can you explain? problem link: my code:

I was solving problems on Light Online Judge and got "Output Limit Exceeded", but I don't know what that means. Can you please tell me what it means, and when it happens? problem link: my code:

Hello everybody, how are you? Today I am going to describe some programming topics which can be helpful for contestants. First, I assume that you all know the C/C++ language, but it doesn't matter. I think you also know about conditional logic, loops, datatypes (input/output), strings, arrays, functions, basic bitwise operations, etc.
I think you should then learn lots of mathematical topics, such as: prime numbers, GCD, LCM, Euler's Totient Function, Bigmod, modular inverse, extended GCD, combinatorics, permutations, combinations, the inclusion-exclusion principle, probability, expected value, base conversion, big integers, cycles, Gaussian elimination, etc.

Then you have to learn about sorting and searching, like: insertion sort, bubble sort, merge sort, selection sort, counting sort, etc.

NOTE: there is a function named "qsort" in the C language. Don't use it. There is an algorithm named "anti-qsort". If you use qsort in codeforces, anyone can easily hack your solution, because the anti-qsort algorithm generates the worst case for qsort, and in that worst case its time complexity is O(n^2). You can use the Standard Template Library instead.

Then you have to learn binary search, backtracking, ternary search, etc.

Then you have the task of learning lots of data structures, such as: linked lists, stacks, queues, heaps, vectors, graphs, trees, segment trees (with and without lazy propagation), binary search trees, square root decomposition, disjoint set union, binary indexed trees, map, set, pair, etc.

Then you have to learn greedy techniques, dynamic programming, graph algorithms, flow, ad hoc problems, geometry, etc. I have summarized the contents below:

1. Dynamic Programming
2. Greedy
3. Complete Search
4. Flood Fill
5. Shortest Path
6. Recursive Search Techniques
7. Minimum Spanning Tree
8. Knapsack
9. Computational Geometry
10. Network Flow
11. Eulerian Path
12. Two-Dimensional Convex Hull
13. BigNums
14. Heuristic Search
15. Approximate Search
16. Ad Hoc Problems

If you clear up all of the above, you will find a great programmer inside you. But mind it, it is not an easy task: you have to practice a lot. If you don't practice, you will never be able to become a good programmer!
SO, NO MORE TODAY. STAY WELL EVERYBODY, ALWAYS TRY TO HELP OTHERS. HAPPY CODING!
https://codeforces.com/blog/Ahnaf.Shahriar.Asif
On Jan 18, 8:22 am, "W. eWatson" <wolftra... at invalid.com> wrote: >. If you haven't already, you really should check out IPython: it's an enhanced interactive shell packed full of convenience features. For example, output is paged by default, so you'll get the behaviour you want from help() and dir() straight away. However, you'll probably end up using IPython's help instead: <object>? will not only display the docstring, it'll provide metadata about the object, such as its base class, the file it was defined in and even the namespace it exists in. %page <object> will pretty-print the object and run it through the pager. %timeit <statement|expression> is an _exceptionally_ handy wrapper around the timeit module. %bg <statement> runs in a separate, background thread. There's a directory stack, macros, code in history can be edited, profiling & debugging, functions can be called without parentheses (nice if you use IPython as a shell replacement), and you can easily capture the results of a command line call to a variable. But yes, along with all that, it pages object printing :)
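The timeit module that %timeit wraps can also be called directly from plain Python, without IPython, for the same quick micro-benchmarks:

```python
import timeit

# Time 1000 runs of a small expression; timeit handles the repeat loop
# and uses a high-resolution clock, just like %timeit does in IPython.
elapsed = timeit.timeit("sum(range(100))", number=1000)
assert elapsed >= 0.0  # total seconds for all 1000 runs
```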
https://mail.python.org/pipermail/python-list/2010-January/564714.html
Ned here again. Today I'm going to talk about a couple of scenarios we run into with the ConflictAndDeleted folder in DFSR. These are real quick and dirty, but they may save you a call to us someday.

Scenario 1: We need to empty out the ConflictAndDeleted folder in a controlled manner as part of regular administration (i.e. we just lowered quota and we want to reclaim that space).

Scenario 2: The ConflictAndDeleted folder quota is not being honored due to an error condition and the folder is filling the drive.

Let's walk through these now.

Emptying the folder normally

It's possible to clean up the ConflictAndDeleted folder through the DFSMGMT.MSC and SERVICES.EXE snap-ins, but it's disruptive and kind of gross (you could lower the quota, wait for AD replication, wait for DFSR polling, and then restart the DFSR service). A much faster and slicker way is to call the WMI method CleanupConflictDirectory from the command-line or a script:

1. Open a CMD prompt as an administrator on the DFSR server.

2. Get the GUID of the Replicated Folder you want to clean:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderconfig get replicatedfolderguid,replicatedfoldername

(This is all one line, wrapped.) Example output:

3. Then call the CleanupConflictDirectory method:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='<RF GUID>'" call cleanupconflictdirectory

4. At this point the ConflictAndDeleted folder will be empty and the ConflictAndDeletedManifest.xml will be deleted.

Emptying the ConflictAndDeleted folder when in an error state

We've also seen a few cases where the ConflictAndDeleted quota was not being honored at all. In every single one of those cases, the customer had recently had hardware problems (specifically with their disk system) where files had become corrupt and the disk was unstable – even after repairing the disk (at least to the best of their knowledge), the ConflictAndDeleted folder quota was not being honored by DFSR.
Here’s where quota is set:

To fix this issue:

- Follow steps 1-4 from above. This may clean the folder as well as update DFSR to say that cleaning has occurred. We always want to try doing things the 'right' way before we start hacking.
- Stop the DFSR service.
- Delete the contents of the ConflictAndDeleted folder manually (with explorer.exe or DEL).
- Delete the ConflictAndDeletedManifest.xml file.
- Start the DFSR service back up.

For a bit more info on conflict and deletion handling in DFSR, take a look at:

Staging folders and Conflict and Deleted folders (TechNet)
DfsrConflictInfo Class (MSDN)

Until next time…

– Ned "Unhealthy love for DFSR" Pyle

Ned, great. I actually was looking for this today, because we had an issue where the "normal" cleanup for some reason did not occur and 130 GB of data was roaming around the ConflictAndDeleted folder. The WMI call did the trick, thanks. regards, Marcel Heek

There will be a small service fee for my psychic powers. 🙂 – nedpyle

Hi Ned! Now that you're in the cleaning business =), what about the Staging folder under the DfsrPrivate folder? Are there any special considerations to keep in mind if you want to clean that folder when there is data that seems to be stuck in it? Regards, Stefan Nilsson

Hi there, thank you very much for the info. Our DFS conflict folder bloated to a massive 130GB, and it almost gave me a panic attack! :P I initially tried the WMI script; weirdly enough, it did not work (gasp!). I am now doing a manual delete (DEL command); fingers and toes crossed that it works as planned and won't bring down our infra. :P Will update the blog again. Cheers! Nathaniel

Thanks Ned for your tip. It worked and our DFS is now back to a stable size. Cheers, Nathaniel

Ok, bumped into this one… I seem to have a problem with the initial command (the one where we get the GUIDs).
My namespace is \\rodic.com\dfs. When I do the path dfsrreplicatedfolderconfig part I get an error 0x8004100e INVALID NAMESPACE. Hmm, any ideas? What am I doing wrong?

Since you didn't post your syntax I cannot be sure. It sounds like you are confusing your DFS Namespace with the WMI namespace of the command. You do not need to alter the command line to obtain the Replicated Folder GUIDs; run it as-is. The same is true for the command that you run to purge the ConflictAndDeleted folder. The namespace we are targeting with WMIC is the WMI namespace "\\root\microsoftdfs", not your DFS Namespace. For example:

wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='81CD0A52-F132-4A91-BEF7-205262F526BA'" call cleanupconflictdirectory

The only portion of the command line you need to change is the GUID "81CD0A52-F132-4A91-BEF7-205262F526BA", replacing it with your replicated folder's GUID:

wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='Darko's RF GUID goes here'" call cleanupconflictdirectory

This KB also explains how to run the command: KB 951010.

I did it again and it works now. I'm not sure what happened. Anyway, thanks! regards, Aidan

I stumbled on this article by accident but it prompted me to check my folder sizes. Turned out I had several that had exceeded their quotas many times over. This was on a Server 2003 R2 so I wasn't sure if your trick would work. It did brilliantly though, so thanks very much. It would be great to work this little function into a button on the GUI (as shown on your 3rd image).

"It would be great to work this little function into a button on the GUI (as shown on your 3rd image)" Agreed. Maybe someday… – Ned
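Since the only thing that changes between runs is the replicated folder GUID, the two WMIC command lines from the steps above can be generated programmatically. The helper below is hypothetical (the function names are mine, not from the article); it only builds the command strings for review or logging, and would be run with `subprocess` on the actual DFSR server:

```python
# Builds the WMIC command lines from the article's steps 2 and 3.
LIST_CMD = (
    "WMIC.EXE /namespace:\\\\root\\microsoftdfs path dfsrreplicatedfolderconfig "
    "get replicatedfolderguid,replicatedfoldername"
)

def cleanup_cmd(rf_guid):
    # Step 3: call CleanupConflictDirectory for one replicated folder GUID.
    return (
        "WMIC.EXE /namespace:\\\\root\\microsoftdfs path dfsrreplicatedfolderinfo "
        "where \"replicatedfolderguid='%s'\" call cleanupconflictdirectory" % rf_guid
    )

print(cleanup_cmd("81CD0A52-F132-4A91-BEF7-205262F526BA"))
```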
https://blogs.technet.microsoft.com/askds/2008/10/06/manually-clearing-the-conflictanddeleted-folder-in-dfsr/
NAME
putenv - change or add an environment variable

SYNOPSIS
#include <stdlib.h>
int putenv(char *string);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
putenv(): _SVID_SOURCE || _XOPEN_SOURCE

DESCRIPTION
The putenv() function adds or changes the value of environment variables. The argument string is of the form name=value. The putenv() function returns zero on success, or non-zero if an error occurs.

ERRORS
ENOMEM Insufficient space to allocate new environment.

CONFORMING TO
SVr4, POSIX.1-2001, 4.3BSD

NOTES
The putenv() function is not required to be reentrant, and the one in libc4, libc5 and glibc 2.0 is not, but the glibc 2.1 version is.

SEE ALSO
clearenv(3), getenv(3), setenv(3), unsetenv(3), environ(7)

COLOPHON
This page is part of release 3.01 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
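For a rough analogue of the same operation outside C: in Python, mutating `os.environ` both updates the in-process mapping and calls the underlying putenv so that child processes inherit the change (a sketch, not part of the man page):

```python
import os

# Analogous to putenv("DEMO_VAR=hello") in C:
os.environ["DEMO_VAR"] = "hello"
assert os.environ["DEMO_VAR"] == "hello"

# Analogous to unsetenv(3):
del os.environ["DEMO_VAR"]
assert "DEMO_VAR" not in os.environ
```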
http://manpages.ubuntu.com/manpages/intrepid/en/man3/putenv.3.html
Revision as of 13:44, 30 March 2013

Python is a powerful object-oriented scripting language that is available on every platform and has extensive community support. The official Python documentation can be found at. As with Macros, Python uses commands to interact with modo, but unlike macros, Python can be used to query those commands for their values and usefully interpret the results. Because it can query commands, Python can make excellent use of the ScriptQuery system provided by the query command to obtain lower-level information from the various subsystems, including extracting specific mesh information that is not otherwise accessible through commands themselves. modo currently uses the Python 2 line. The Python 3 line is not compatible with Python 2.x, and is not currently supported.

Contents
1 Python API
2 Python Header
3 The lx Module
4 ScriptQuery Service
5 Error Handling with Exceptions
6 Monitors
7 sys.exit
8 Executing Python Scripts from Other Python Scripts
9 External Modules
10 More Information

Python API

modo 701 introduces a deeply integrated Python API. This provides SDK-level access to modo through Python, allowing full-on plug-ins to be created entirely through scripting, without any knowledge of C++ itself. Much of this article concerns the older fire-and-forget approach to Python scripting. This can be mixed with the new Python API, but it is important to be aware that commands can only be executed at certain times when it is safe, and the Python API is usually much faster than command queries. For these reasons you will likely find that lx.eval and other functions in this article aren't used as much anymore.

Python Header

All Python scripts start with a simple header so that the interpreter can recognize them.

# python

The lx Module

The modo extensions to Python are encapsulated in the lx module.
In older versions of modo you first needed to import the lx module like so, but starting with modo 301 both it and sys are implicitly imported:

import lx

The Python implementation in modo is a bit different from that of Lua or Perl. As Python is an object-oriented language, all of the standard functions are encapsulated as methods in the lx object. Error handling is performed with exceptions instead of result codes; this is explained in detail later on.

lx.trace

modo 401 introduced lx.trace support for Python. The lx.trace method can be used to toggle tracing on and off, causing the results of many of the lx methods to be output to the Scripting sub-system of the Event Log viewport. Passing True turns tracing on, while False turns it off. It can also be used to test if tracing is on by passing no arguments. Note that these boolean states are case-sensitive Python boolean keywords, and must be properly capitalized to avoid syntax errors.

tracing = lx.trace()  # See if tracing is on
lx.trace( True )      # Turn on tracing

lx.out

Text can be output to the Event Log with lx.out. Any arguments passed will be concatenated before being written out. An empty argument list outputs a blank line.

# python
# Print a blank line
lx.out()
# Print a label followed by the Python version string
lx.out( "Python Version: ", sys.version )

lx.eval

lx.eval is used to both execute and query commands in Python. Simply pass in a command string with the appropriate arguments using standard command syntax, and the command is executed or queried.

# Execute
lx.eval( "layout.togglePalettes" )

# Query
q = lx.eval( "layout.togglePalettes ?" )

Unlike Lua and Perl, all error reporting is done through exceptions. When executing a command, lx.eval will always return None. When querying a command, it returns either an array of elements or a single value, depending on the number of elements in the query.
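This single-value-or-array contract can be emulated outside of modo. The sketch below is plain Python with no dependency on the lx module; the helper names eval_n and eval_1 are hypothetical stand-ins for lx.evalN and lx.eval1, applied to an already-obtained query result:

```python
def eval_n(result):
    """Normalize a query result to a tuple, mimicking lx.evalN's contract:
    None stays None, a tuple passes through, a single value is wrapped."""
    if result is None:
        return None
    if isinstance(result, tuple):
        return result
    return (result,)

def eval_1(result):
    """Mimic lx.eval1: always return a single value (or None),
    taking the first element when the query produced several."""
    normalized = eval_n(result)
    return normalized[0] if normalized else None

# Fake query results standing in for lx.eval( "material.name ?" )
print(eval_n("Default"))         # a single value becomes a 1-tuple
print(eval_n(("Mat1", "Mat2")))  # a tuple of values passes through
print(eval_1(("Mat1", "Mat2")))  # only the first value is kept
```

With wrappers like these, calling code can always iterate (via eval_n) or always index a scalar (via eval_1) without branching on type(q).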
To see if the result of lx.eval is an array of values or a single value, you can call the Python type() function on the variable.

# python
q = lx.eval( "material.name ?" )
if type( q ) == tuple:
    pass  # Array of values
else:
    pass  # Single value

lx.eval1 and lx.evalN

There are many cases where a query may return one or many elements, often depending on what is currently selected in the application. Since lx.eval's return value depends on the number of elements, you would need to handle both cases in separate branches of code, which can get tedious. To avoid this issue, you can use lx.eval1 and lx.evalN. lx.eval1 always returns None or one element directly, even if the query returned a list of elements (in which case the first element is returned). lx.evalN always returns None or an array, even if there is only one element.

# python
# Query and get a single value
q1 = lx.eval1( "material.name ?" )
# Query and get an array of values
qN = lx.evalN( "material.name ?" )

Executing a command with lx.eval1 or lx.evalN operates identically to lx.eval.

lx.command

Rather than execute a command with a string, you can pass the command name and each of its arguments directly through lx.command. This is often simpler than having to construct a command string from scratch. lx.command makes use of name/value pairs by taking advantage of Python's ability to specify arguments and their values by name.

# python
lx.command( "view3d.shadingStyle", style="wire" )

lx.test

Python allows for easy testing of ToggleValue commands with lx.test. This is used in a similar manner to lxqt in Perl and Lua, returning True if the ToggleValue is on and False if it is off.

# python
isActive = lx.test( "tool.set prim.cube on" )

lx.option() and lx.setOption()

The lx.option and lx.setOption methods allow the script to set properties that determine how the other lx methods operate. This works similarly to Perl, where you pass the property to be modified and the way in which it is to be modified.
The primary difference between Python and Perl is that queryAnglesAs (the only property currently supported) defaults to radians for backwards compatibility with previous versions of modo.

# python
lx.setOption( "queryAnglesAs", "radians" )
lx.out( lx.option( "queryAnglesAs" ) )

lx.arg and lx.args

Argument parsing is available through the lx.arg and lx.args methods. lx.arg returns the raw argument string that was passed into the script. lx.args parses the argument string and returns an array of arguments for easier processing.

# python
argsAsString = lx.arg()
argsAsTuple = lx.args()

ScriptQuery Service

Accessing ScriptQuery interfaces from Python can be accomplished using the query command as normal. However, Python also provides a lower-level system through the lx module's lx.Service method and the associated service object. This also has less overhead than using the query command.

lx.Service

The first step in using a ScriptQuery interface is obtaining a Service object with lx.Service. This takes the service's name string as its only argument.

# python
s = lx.Service( "layerservice" )

The Service object has five methods: name, select, query, query1 and queryN. You can have as many service objects as you like, each with its own selections. This is in contrast to the query command, which shares its selection as global state among all clients, and thus has but a single selection.

s.name

The name method returns the name of the Service. This is the same string that was passed into lx.Service.

# python
s = lx.Service( "layerservice" )
lx.out( "Service Name: ", s.name() )

s.select

The select method takes the place of the query command's select argument, and is used to pass selectors to the ScriptQuery interface. It takes two arguments: the attribute class name string and the selector string. If the attribute doesn't require a selector it can be omitted. Both arguments can be omitted to clear the selection.
The attribute class name is the part of the attribute before the period. For example, the class of the layer.name attribute is layer. The Service object will also accept the full attribute name and extract the class itself.

This example sets the selectors for the layer class attributes to the foreground layers.

# python
s = lx.Service( "layerservice" )
s.select( "layer", "fg" )

s.query

The query method queries the Service object for the value of a previously selected attribute. It returns either a single value or an array of values, depending on the number of values in the query.

# python
s = lx.Service( "layerservice" )
s.select( "layer", "fg" )
name = s.query( "layer.name" )

s.query1 and s.queryN

The query1 method can be used to always get the first element of a query, while queryN will always return an array of results. This allows for consistent handling of attributes that may return a variable number of elements.

Error Handling with Exceptions

Python error handling in modo is done entirely through exceptions. If one of the lx methods fails because of an unknown command or service, a NameError exception is thrown. If a command fails to execute, a RuntimeError exception is thrown. These can be handled with the standard Python try and except keywords. The LxResult code of the command failure can be read with sys.exc_info().

# python
# Set up our try block
try:
    # First execution: user.defNew creates a new user value
    lx.command( "user.defNew", name="MyValue" )
    # Second execution: user.defNew fails because a value with that name already exists
    lx.command( "user.defNew", name="MyValue" )
# Handle exceptions
except RuntimeError:
    lx.out( "Command failed with ", sys.exc_info()[0] )

Monitors

Progress bars in Python are handled through the Monitor object. This provides the same progress bar functionality as in Perl and Lua.

lx.Monitor

A Monitor object is obtained through a call to lx.Monitor. The optional argument is the total number of steps in the progress bar.
# python
m = lx.Monitor( 42 )

The Monitor object has two methods, init and step. Although you can create as many monitors as you like, the internal mechanism for progress bars will cause only the first one to do anything. However, this will change in the future to allow multiple monitors to be used simultaneously.

m.init

The init method sets the total number of steps in the monitor and resets the current position to 0.

# python
m.init( 100 )

m.step

The step method increments the current monitor position by 1 if the argument is omitted; otherwise, it increments by that number of steps. It also lets the application check for input, and will throw an exception if the user clicked the abort button.

# python
m.step( 2 )

Monitor Example

Here we have a simple example of a monitor in Python. The loop is simply to allow enough time to pass for the monitor to appear. (Note: you may need to tweak the range of the inner loop depending on the speed of your system; on very fast systems, the loop may complete before the monitor appears. Or you can be a good programmer and use a proper time delay function instead of a busy loop, but this is sufficient for our example.)

# python
# Create the monitor. We could pass the total number of steps here, too
m = lx.Monitor()

# Set the total number of steps
m.init(10000)

# Do a loop, iterating over our 10000 monitor steps
for i in range(0, 10000):
    # Step the monitor. We could omit the argument to step by 1 rather than explicitly specifying it.
    m.step(1)

    # Pause briefly via a busy loop, so the monitor will be displayed. Monitors only appear if the
    # operation takes a sufficiently long time (a couple of seconds or more).
    for j in range(0, 1000000):
        a = 6

sys.exit

Python scripts can be exited by using the standard Python sys.exit() call. Simply calling sys.exit() with no arguments exits the script with no error.
# python
sys.exit()

Failure can also be reported by passing an argument string in the form of "code:message", although either the code or the message can be omitted. The code is a standard message code used in the SDK, each with a different meaning. This can be passed as an integer, a hexadecimal string, or one of the common named codes. Here are some examples of the default "ok" code.

sys.exit()                # The default is LXe_OK
sys.exit( "0" )           # Equivalent to LXe_OK
sys.exit( "0x00000000" )  # Also equivalent to LXe_OK
sys.exit( "LXe_OK" )

A code can be combined with a message by adding a colon. For failure codes, this results in an error dialog that displays the failure along with the message.

# python
sys.exit( "LXe_FAILED:Script failed" )

A message can also be set without a code, although this isn't generally useful as the message isn't currently displayed anywhere.

# python
sys.exit( ":Insert Message Here" )

Executing Python Scripts from Other Python Scripts

modo supports running one Python script from another Python script by simply using the standard @ syntax with lx.eval(). Since Python normally has a single main interpreter, modo makes use of the Python API's sub-interpreter mechanism. This allows an almost completely independent state to exist for each executing script. Care should be taken when using functions in low-level modules, such as os.close(), which may affect both the currently running script and the script(s) that executed it. However, for general day-to-day scripting, this is unlikely to be an issue. More information on sub-interpreters can be found at the official Python web site.

External Modules

Python support in modo includes the ability to load external Python modules. You may include extra modules with your scripts by placing them in any imported resource directory. These are specified using the <import> tag in a config file.
All of these paths are added to the Python sys.path module search path, thus ensuring that your specific modules will be found. The path to the script itself is also added to sys.path.

Using External Modules & the site-packages Directory
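The search-path mechanism described above is plain Python: any directory appended to sys.path becomes importable. The sketch below demonstrates it without modo, writing a throwaway module (the name my_helper is made up for the example) into a temporary directory and importing it:

```python
import os
import sys
import tempfile

# Create a throwaway directory containing a tiny module.
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, "my_helper.py"), "w") as f:
    f.write("ANSWER = 42\n")

# Appending the directory to sys.path makes the module importable --
# the same mechanism modo relies on when it adds imported resource
# directories (and the script's own directory) to the search path.
sys.path.append(module_dir)
import my_helper

print(my_helper.ANSWER)  # 42
```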
https://modosdk.foundry.com/index.php?title=Python&diff=prev&oldid=8584
rbco.caseclasses

Provide a compact syntax to define simple "struct-like" (or "record-like", "bean-like") classes. The resulting classes are very similar to namedtuple, but mutable, with a nicer syntax, more flexibility and more features. Here's a summary of the features:

See also the motivation section for other implementations of the concept, especially MacroPy, which was the inspiration for this project and uses a very different approach.

Currently only Python 2.7 is supported.

The usual:

pip install rbco.caseclasses

Or:

easy_install rbco.caseclasses

Let's start by creating a simple case class:

>>> from rbco.caseclasses import case
>>>
>>> @case
... class Person(object):
...     """Represent a person."""
...     def __init__(self, name, age=None, gender=None): pass

The declared __init__ is just a stub. The parameters define which fields the class will have and their default values. The __init__ method is replaced by a new one, which takes care of assigning the values of the fields.

The constructor works as expected, according to the provided __init__ stub:

>>> Person('John')
Person(name='John', age=None, gender=None)
>>> Person('John', 30, 'm')
Person(name='John', age=30, gender='m')
>>> Person(name='John', age=30, gender='m')
Person(name='John', age=30, gender='m')
>>> Person('John', gender='m')
Person(name='John', age=None, gender='m')

Note that in the string representation the fields are in the same order as defined in the constructor.

The docstring of the class is preserved:

>>> Person.__doc__
'Represent a person.'

The signature of the constructor is not preserved.
The resulting __init__ method signature is a generic one, taking only *args and **kwargs:

>>> from inspect import getargspec
>>> getargspec(Person.__init__)
ArgSpec(args=['self'], varargs='args', keywords='kwargs', defaults=None)

However, the docstring contains the original signature:

>>> Person.__init__.__doc__
'Original signature: (self, name, age=None, gender=None)'

It's not possible to create a case class without a constructor:

>>> from rbco.caseclasses import case
>>>
>>> @case
... class Foo(object): pass
Traceback (most recent call last):
...
RuntimeError: Case class must define a constructor.

Instances are mutable:

>>> p = Person('John')
>>> p
Person(name='John', age=None, gender=None)
>>> p.name = 'Bob'
>>> p.age = 35
>>> p
Person(name='Bob', age=35, gender=None)

However, it's not possible to assign to unknown attributes:

>>> p.department = 'sales'
Traceback (most recent call last):
...
AttributeError: 'Person' object has no attribute 'department'

This is because of the __slots__ declaration:

>>> p.__slots__
['name', 'age', 'gender']

Structural equality is supported:

>>> p1 = Person('John', 30)
>>> p2 = Person('Bob', 25)
>>> p1 == p2
False
>>> p1 != p2
True
>>> p2.name = 'John'
>>> p2.age = 30
>>> p1 == p2
True
>>> p1 != p2
False
>>> p2.name = 'Bob'
>>> p1 == p2
False

A copy-constructor is provided:

>>> p1 = Person('John', 30)
>>> copy_of_p1 = p1.copy()
>>> p1
Person(name='John', age=30, gender=None)
>>> copy_of_p1
Person(name='John', age=30, gender=None)
>>> p1 is copy_of_p1
False
>>> p2 = p1.copy(name='Bob', gender='m')
>>> p2
Person(name='Bob', age=30, gender='m')

Conversion from/to dictionary is easy.
The as_dict method returns an OrderedDict:

>>> p1 = Person('Mary', 33)
>>> p1
Person(name='Mary', age=33, gender=None)
>>> p1.as_dict()
OrderedDict([('name', 'Mary'), ('age', 33), ('gender', None)])
>>> Person(**p1.as_dict())
Person(name='Mary', age=33, gender=None)

Conversion from/to tuple is also possible:

>>> p1 = Person('John', 30)
>>> p1
Person(name='John', age=30, gender=None)
>>> p1.as_tuple()
('John', 30, None)
>>> Person(*p1.as_tuple())
Person(name='John', age=30, gender=None)

Case classes are very much like regular classes. It's possible to define any kind of custom members. The most common case should be adding a custom instance method:

>>> import math
>>> @case
... class Point(object):
...     def __init__(self, x, y): pass
...
...     def distance(self, other):
...         return math.sqrt((self.x - other.x)**2 + (self.y - other.y)**2)
...
>>> p1 = Point(0, 0)
>>> p2 = Point(10, 0)
>>> p1.distance(p2)
10.0

Other kinds of class members are supported as well:

>>> @case
... class Example(object):
...     def __init__(self): pass
...
...     class_attribute = 'some value'
...
...     @staticmethod
...     def static_method():
...         print 'This is an static method.'
...
...     @classmethod
...     def class_method(cls):
...         print 'This is a class method of the class {}.'.format(cls.__name__)
...
>>> e = Example()
>>> e.class_attribute
'some value'
>>> Example.static_method()
This is an static method.
>>> Example.class_method()
This is a class method of the class Example.

Let's create a base case class and a derived one:

>>> @case
... class Person(object):
...     def __init__(self, name, age=None, gender=None): pass
...
...     def present(self):
...         print "I'm {}, {} years old and my gender is '{}'.".format(
...             self.name,
...             self.age,
...             self.gender
...         )
...
>>> @case
... class Employee(Person):
...     def __init__(self, name, age=None, gender=None, department=None): pass

It's necessary to repeat the fields of the base class, but you would have to do that anyway if you were implementing the case classes manually.

Methods from the base class are inherited:

>>> p = Person('John', 30, 'm')
>>> p.present()
I'm John, 30 years old and my gender is 'm'.
>>> e = Employee('Mary', 33, 'f', 'sales')
>>> e.present()
I'm Mary, 33 years old and my gender is 'f'.
Instances of Person and Employee will always be considered different, since employees have an extra field:

>>> p = Person('John')
>>> e = Employee('John')
>>> p == e
False

Overriding a base class method works as expected:

>>> @case
... class ImprovedEmployee(Employee):
...     def present(self):
...         super(ImprovedEmployee, self).present()
...         print 'I work at the {} department.'.format(self.department)
...
>>> ie = ImprovedEmployee(name='Mary', department='marketing', age=33, gender='f')
>>> ie.present()
I'm Mary, 33 years old and my gender is 'f'.
I work at the marketing department.

It's possible to override the standard case class methods (__repr__, __eq__, etc.). For example:

>>> @case
... class Foo(object):
...     def __init__(self, bar): pass
...
...     def __eq__(self, other):
...         return True  # All `Foo`s are equal.
...
>>> Foo('bar') == Foo('baz')
True

It's even possible to call the original version on the subclass method:

>>> @case
... class Foo(object):
...     def __init__(self, bar):
...         pass
...
...     def __repr__(self):
...         return 'This is my string representation: ' + super(Foo, self).__repr__()
...
>>> Foo('bar')
This is my string representation: Foo(bar='bar')

It's not possible to override the __init__ method, because it's replaced when the @case decorator is applied. If a custom constructor is needed, using the CaseClassMixin can be a solution.

The classes created by the @case decorator inherit from CaseClassMixin:

>>> from rbco.caseclasses import CaseClassMixin
>>> issubclass(Person, CaseClassMixin)
True

CaseClassMixin provides all the "case class" behavior, except for the constructor. To use CaseClassMixin directly, the only requirement the subclass must meet is to provide a __fields__ attribute containing a sequence of field names. This can be useful if greater flexibility is required. In the following example we create a case class with a custom constructor:

>>> class Foo(CaseClassMixin):
...     __fields__ = ('field1', 'field2')
...
...     def __init__(self, field1, *args):
...         self.field1 = field1 + '_modified'
...         self.field2 = list(args)
...
>>> Foo('bar', 1, 2)
Foo(field1='bar_modified', field2=[1, 2])

The constructor of a case class cannot be customized, because it's replaced when the @case decorator is applied. See the section about CaseClassMixin for an alternative.

It's not possible to assign to unknown fields, because of the __slots__ declaration.

The constructor cannot take *args or **kwargs:

>>> @case
... class Foo(object):
...     def __init__(self, **kwargs): pass
Traceback (most recent call last):
...
RuntimeError: Case class constructor cannot take *args or **kwargs.

See the section about CaseClassMixin for an alternative.

The idea for this project came from MacroPy. It provides an implementation of case classes using syntactic macros, which results in a very elegant way to define the case classes. The motivation was to provide similar functionality without resorting to syntactic macros or string evaluation (the approach taken by namedtuple). In other words: to provide the best implementation possible without using much magic.

The comparison to MacroPy can be summarized as follows:

Advantages:

Disadvantages:

Other implementations of the "case class" concept (or similar) in Python exist.

Some implementation ideas were considered but discarded afterwards. Here some of them are discussed.

One was using a function to generate the class. This would be something like this:

Person = case_class('Person', 'name', age=None, gender=None)

The first problem with this idea is that there's no way to preserve the order of the fields. The case_class function would have to be defined like this:

def case_class(__name__, *args, **kwargs): ...

**kwargs is an unordered dictionary, so the order of the fields is lost. To overcome this, the following syntax could be used:

Person = case_class('Person', 'name', 'age', 'gender', age=None, gender=None)

I think this syntax is not elegant enough.
I don't like the repetition of field names, nor having field names represented as both strings and parameter names. Perhaps something like this would work too:

Person = case_class('Person', ['name', 'age', 'gender'], {'age': None, 'gender': None})

But again I think the syntax is not elegant. Also, some functionalities would be difficult to support using this syntax, namely:

Custom members. This would mean complicating the signature of the case_class function, or adding the custom members after the class is created, like this:

Person = case_class('Person', ...)

def present(self):
    print ...

Person.present = present

Not very elegant.

Inheritance. This would require a new parameter to the case_class function, to allow passing in a base class.

Another idea was a decorator taking the fields as arguments, which would end the necessity of defining an empty constructor. The syntax would be like this:

@case(name, age=None, gender=None)
class Person(object):
    'Represent a person.'

The same problem faced by the function syntax arises: field ordering is not preserved, since the case function would have to accept a **kwargs argument, which is an unordered dict. Alternate syntaxes, similar to the ones presented for the functional syntax, could overcome the field ordering problem. However, I think the solution using an __init__ stub to define the fields is more elegant.

Yet another idea was declaring fields as class attributes. The syntax would be like this:

@case
class Person(object):
    name = NO_DEFAULT_VALUE
    age = None
    gender = None

Again, there's no way to preserve the order of the fields. The case function would have to retrieve the class attributes from Person.__dict__, which is unordered. Maybe something like this would work:

@case
class Person(object):
    __fields__ = (
        ('name', NO_DEFAULT_VALUE),
        ('age', None),
        ('gender', None)
    )

However, I think the solution using an __init__ stub to define the fields is more elegant.

Please fork this project and submit a pull request if you would like to contribute. Thanks in advance.
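As a closing illustration of the approach discussed above, the core trick — reading the field names and defaults from an __init__ stub and replacing the constructor — can be sketched in a few lines. This is a toy sketch, not rbco.caseclasses' actual implementation (which targets Python 2.7 and adds __slots__, equality, copy, etc.); it uses the Python 3 inspect.signature API, and the decorator name simple_case is made up:

```python
import inspect

def simple_case(cls):
    """Toy @case-style decorator: derive fields from the __init__ stub,
    then replace __init__ with one that assigns them, and add a repr."""
    sig = inspect.signature(cls.__init__)
    # Drop 'self'; the remaining parameter names are the fields, in order.
    fields = [p.name for p in list(sig.parameters.values())[1:]]

    def __init__(self, *args, **kwargs):
        # Bind the call against the original stub's signature so that
        # positional/keyword arguments and defaults behave as declared.
        bound = sig.bind(self, *args, **kwargs)
        bound.apply_defaults()
        for name in fields:
            setattr(self, name, bound.arguments[name])

    def __repr__(self):
        parts = ", ".join("%s=%r" % (f, getattr(self, f)) for f in fields)
        return "%s(%s)" % (cls.__name__, parts)

    cls.__fields__ = fields
    cls.__init__ = __init__
    cls.__repr__ = __repr__
    return cls

@simple_case
class Person(object):
    def __init__(self, name, age=None, gender=None): pass

print(Person("John", 30))  # Person(name='John', age=30, gender=None)
```

Because the stub's signature is inspected rather than evaluated as a string, field order and defaults are preserved exactly as written, which is the property the discarded syntaxes above could not provide.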
https://pypi.org/project/rbco.caseclasses/
CC-MAIN-2017-09
refinedweb
1,657
50.33