This section describes a simple example of inheritance in Java.
In this section we will look at inheritance using a simple example.
Inheritance is an OOP feature that allows a child class to inherit the features of its parent. It enables code reuse, which saves the programmer time while coding. Technically, inheritance can be described as the derivation of one class from another class. Through inheritance, an object contains all the properties and behaviors of its parent class. Inheritance establishes an IS-A relationship between two classes.
Types of Inheritance
Programming languages support various types of inheritance. Java, however, does not support multiple inheritance; this reduces complexity and simplifies the language.
To apply inheritance in Java, the keyword extends is used, with the following syntax:
class className extends parentClassName { ..... }
Example
Here is a simple example that demonstrates how to use inheritance in Java and, in particular, the code-reusability advantage it brings. In this example we create several classes. The parent/super class is the class that is extended by another class, called the child/derived class. The derived class acquires all the properties and behaviors of the parent class and can use them without declaring or defining them again. First we create a class named ClassA.java (the parent class), which declares the properties and defines methods to set their values and display the result. Then we create a class named ClassB.java, which extends ClassA (making it the derived class) and defines a method for adding two numbers. Finally, a main class creates an object of ClassB and, through it, uses the properties and methods of ClassA to perform the addition. Using all of ClassA's properties and behaviors in the main class through a ClassB object demonstrates code reuse: the code for storing the values and showing the result is written only once.
ClassA.java
public class ClassA {
    int a;
    int b;
    int result;

    public void setValue(int a, int b) {
        this.a = a;
        this.b = b;
    }

    public void showResult() {
        System.out.println("Addition of two numbers = " + result);
    }
}
ClassB.java
public class ClassB extends ClassA {
    public void add() {
        result = a + b;
    }
}
MainClass.java
public class MainClass {
    public static void main(String args[]) {
        ClassB b = new ClassB();
        b.setValue(6, 4);
        b.add();
        b.showResult();
    }
}
Output
When you compile and execute the above class, i.e. MainClass.java, the output is as follows:

Addition of two numbers = 10
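For comparison only (this is not part of the original tutorial), the same reuse pattern can be sketched in Python: the child class B inherits set_value and show_result from A and adds only the add method.

```python
class A:
    """Parent class: holds the operands and displays the result."""
    def set_value(self, a, b):
        self.a = a
        self.b = b

    def show_result(self):
        print("Addition of two numbers =", self.result)


class B(A):
    """Child class: reuses A's members and only adds the addition."""
    def add(self):
        self.result = self.a + self.b


b = B()
b.set_value(6, 4)
b.add()
b.show_result()  # prints: Addition of two numbers = 10
```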
If you are a Haskell convert from Lisp, JavaScript or any other dynamic programming language, you might miss the eval function of those languages.
eval lets us load code dynamically and execute it on the fly. It is commonly used to provide user-defined plugins and is a very handy tool for extending software.
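To make the comparison concrete, this is what dynamic evaluation looks like in a dynamic language such as Python (an illustrative aside; hint brings the same idea to Haskell):

```python
# eval runs a string as an expression; exec can define whole
# functions at runtime -- the basis of simple plugin systems.
expr = "1 + 2 * 3"
print(eval(expr))  # -> 7

plugin_source = "def greet(name):\n    return 'hello ' + name"
namespace = {}
exec(plugin_source, namespace)      # "loads" the plugin
print(namespace["greet"]("world"))  # -> hello world
```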
Dynamic evaluation is not limited to dynamic languages. Even Java supports dynamic class loading through class loaders. Haskell may seem unable to support dynamic evaluation, being a statically typed, compiled language, but GHC allows us to compile and execute Haskell code dynamically through the GHC API.
The hint library provides a Haskell interpreter built on top of the GHC API. It allows us to load and execute Haskell expressions and even coerce them into values.
hint provides a bunch of monadic actions based on the InterpreterT monad transformer; runInterpreter is used to execute an action.
runInterpreter :: (MonadIO m, MonadMask m) => InterpreterT m a -> m (Either InterpreterError a)
Type check
We can check the type of a Haskell expression using typeOf.
λ> import Language.Haskell.Interpreter
λ> runInterpreter $ typeOf "\"foo\""
Right "[GHC.Types.Char]"
λ> runInterpreter $ typeOf "3.14"
Right "GHC.Real.Fractional t => t"
Import modules
hint does not import the Prelude implicitly. We need to import modules explicitly using setImports. For qualified imports, use setImportsQ instead.
λ> runInterpreter $ do { setImports ["Prelude"]; typeOf "head [True, False]" }
Right "Bool"
λ> runInterpreter $ do { setImportsQ [("Prelude", Nothing), ("Data.Map", Just "M")]; typeOf "M.empty" }
Right "M.Map k a"
Evaluate expressions
The eval function lets us evaluate Haskell expressions dynamically.
λ> runInterpreter $ do { setImports ["Prelude"]; eval "head [True, False]" }
Right "True"
λ> runInterpreter $ do { setImports ["Prelude"]; eval "1 + 2 * 3" }
Right "7"
The result type of evaluation is String. To convert the result into the type we want, use interpret with as. Here as provides a witness for its monomorphic type.
λ> runInterpreter $ do { setImports ["Prelude"]; interpret "head [True, False]" (as :: Bool) }
Right True
λ> runInterpreter $ do { setImports ["Prelude"]; interpret "1 + 2 * 3" (as :: Int) }
Right 7
Load modules
It is also possible to load modules dynamically.
Here’s a small module Foo stored in the file Foo.hs.
module Foo where

f = head
g = tail
We can load Foo using the loadModules function. setTopLevelModules ensures that all bindings of the module are in scope.
import Control.Monad
import Language.Haskell.Interpreter

ex :: Interpreter ()
ex = do
  loadModules ["Foo.hs"]
  setTopLevelModules ["Foo"]
  setImportsQ [("Prelude", Nothing)]
  let expr1 = "f [1, 2, 3]"
  a <- eval expr1
  liftIO $ print a
  let expr2 = "g [1, 2, 3]"
  b <- eval expr2
  liftIO $ print b

main :: IO ()
main = do
  r <- runInterpreter ex
  case r of
    Left err -> print err
    Right () -> return ()
Executing this program prints
"1" "[2,3]"
because f is head and g is tail.
Red Hat Bugzilla – Bug 214222
hidden GtkStatusIcon never reappears under KDE
Last modified: 2007-11-30 17:11:47 EST
If you have a GtkStatusIcon and set_visible to false and then later set it to
true, it never shows up in the KDE tray. Simple test case:
#!/usr/bin/python
import gtk, gobject
def show_it(t, *args):
    print "showing it"
    t.set_visible(True)
    return False
t = gtk.status_icon_new_from_file("/usr/share/pirut/pixmaps/puplet-updated.png")
t.set_visible(False)
gobject.timeout_add(5000, show_it, t)
gtk.main()
Can somebody verify that this is fixed with GTK+ trunk ? If it is,
I would be willing to backport the relevant changes to the stable branch.
Definitely fixed with gtk+ 2.11.6
*** Bug 214516 has been marked as a duplicate of this bug. ***
Any chance to see this fixed in F-7's gtk2 too? We're (still) getting lots of
bugs/complaints there.
*** Bug 380071 has been marked as a duplicate of this bug. ***
Once I have a working backported fix, sure. So far, no luck
Thanks.
*** Bug 380571 has been marked as a duplicate of this bug. ***
I posted bug 380071 ... the big thing I'm wondering about is, why did
bluetooth-applet etc. suddenly begin getting started as part of log-in after the
KDE upgrade? I never saw this problem before the recent update ...
Because KDE was patched to honor the XDG autostart directory. | https://bugzilla.redhat.com/show_bug.cgi?id=214222 | CC-MAIN-2016-22 | refinedweb | 234 | 69.58 |
Hi,
I have created a new .resx file in the \Resources folder of my Xamarin.iOS application, but I can't figure out how to read the values in it.
The method used inside the PCL for the solution - Application.Current.Resources[key] - doesn't work, and the only example the docs seem to cover is localisation, which is a bit different.
I'm having a 'thick Friday', so any help would be appreciated.
Kind wishes ~ Patrick
Answers
Update...
According to the documentation here, this class appears to be what I need: System.Resources.ResXResourceReader. But it doesn't seem to be available.
Try adding your RESX file in your project, not in your resource folder. Do not copy from resource folder to your project, since the build options will be mismatched. Add new RESX file and access it with the name of the resource file.
Thank you for the reply, @ashalva. I'm not clear exactly what you mean. Should I be using ResXResourceReader, or reading the file using a different method?
I'm sorry - I don't think my original question was clear. It is access to the class ResXResourceReader that I don't appear to be able to get. I haven't got as far as worrying about the actual file path and name yet.
Incidentally, what is the correct build option for a .resx file? I thought it was BundleResource.
It seems that a .resource file is a .resx file that has been compiled into binary. To compile it, you use resgen and then the resulting binary has to be bundled with your application. I think you then need the ResXResourceReader class to access the compiled binary. Xamarin does not provide this (it is in the System.Resources namespace, in assembly System.Windows.Forms). However, if you just want to use a .resx file to store a few configuration values, all of this might seem like overkill.
An alternative is to simply to use .resx files as text-based resources (actually they are XML). This allows you to use Visual Studio's neat editor to add name-value pairs (and give them an optional comment). The .resx files stay in the normal resource location for your project (on iOS, this is the \Resources folder), and no special build action is required, because files placed here are automatically made available to your running application.
Doing this will incur a small size overhead (because XML is bigger than binary) and there is a small runtime cost (because the XML has to be parsed), but these are negligible as long as your files are not too big.
I've written a small class, which neatly wraps up the reading of these raw XML .resx files. It works a treat, with none of the hassle of trying to incorporate resgen into my build process.
I have not had the time to turn it into a proper online article, but I have attached the code, if you would like to see how to do this.
Kind wishes ~ Patrick
PS - If you find this useful, it might be worth marking this as an answer, so others can find it and use it. | https://forums.xamarin.com/discussion/comment/180645 | CC-MAIN-2019-18 | refinedweb | 528 | 74.79 |
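For what it's worth, the raw-XML approach described above is easy to sketch in any language. Here is an illustrative Python version (the element layout matches the standard .resx format, but the entry names and values are made up, and this is not the attached C# class):

```python
import xml.etree.ElementTree as ET

# A minimal .resx-style document (made-up entries for illustration).
RESX = """<?xml version="1.0" encoding="utf-8"?>
<root>
  <data name="WelcomeMessage"><value>Hello!</value></data>
  <data name="RetryCount"><value>3</value></data>
</root>"""


def read_resx(xml_text):
    # Each <data name="..."><value>...</value></data> pair becomes
    # an entry in a plain dictionary of name -> value strings.
    root = ET.fromstring(xml_text)
    return {d.get("name"): d.findtext("value") for d in root.iter("data")}


values = read_resx(RESX)
print(values["WelcomeMessage"])  # -> Hello!
```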
Compatibility
- 3.2.1 and master: Swift 5.3 / 5.2 / 5.1 / 5.0 / 4.2
- 3.2.1 and master: iOS / macOS (Intel) / macOS (ARM) / Linux / tvOS / watchOS
An iOS library to natively render After Effects vector animations
For the first time, Lottie also supports native UIViewController Transitions out of the box!
Here is just a small sampling of the power of Lottie
Lottie supports CocoaPods and Carthage (Both dynamic and static). Lottie is written in Swift 4.2.
You can pull the Lottie Github Repo and include the Lottie.xcodeproj to build a dynamic or static library.
Add the pod to your Podfile:
pod 'lottie-ios'
And then run:
pod install
After installing the CocoaPod into your project, import Lottie with import Lottie.
// swift-tools-version:5.1
import PackageDescription

let package = Package(
    name: "YourTestProject",
    platforms: [
        .iOS(.v12),
    ],
    dependencies: [
        .package(url: "", from: "3.1.2")
    ],
    targets: [
        .target(name: "YourTestProject", dependencies: ["Lottie"])
    ]
)
And then import wherever needed:
import Lottie
Set DEAD_CODE_STRIPPING = NO in your Build Settings. NOTE: For macOS you must set the Branch field to lottie/macos-spm.
If you have doubts, please, check the following links:
After successfully retrieving the package and adding it to your project, just import Lottie and you can get its full benefits.
As of 3.0 Lottie has been completely rewritten in Swift!
For Objective-C support please use Lottie 2.5.3. Alternatively an Objective-C branch exists and is still active.
The official objective c branch can be found here:
Also check out the documentation regarding it here:
The Lottie SDK does not collect any data. We provide this notice to help you fill out App Privacy Details. | https://swiftpackageindex.com/airbnb/lottie-ios | CC-MAIN-2021-10 | refinedweb | 269 | 58.08 |
Search - "horror"
- I have a confession to make.
I do most of my java coding in comic sans ;-;
IT MAKES ME HAPPY FOR SOME REASON
- Want to read a horror story?
Number of computers
1900: 0
1950: 100
1980: 10000
1990: 1000000
2000: 1000000000
2010: 1000000000000000000
2020: 1
2030: 1
2040: 1
2050: 1
- So a user reported they couldn't login to our site, so I reset their password to:
uI+ffRT7M2NAzo8uOqzf4QxO3I9tj8PJ4TS0n8zDV7I
And sent them back an email with the updated password. A few minutes later, they replied and said that password didn't work. They even tried a different web browser, etc. I tried it myself, and sure enough, it didn't work.
I spent the next several hours trying to figure out why the password didn't save properly, or why the logic didn't compare them correctly. Perhaps it was some sort of caching issue? Oh the horror.
As it turns out, the problem was a maxlength of 28 on the login form field:
<input type="password" name="password" value="" maxlength="28"/>
I don't know who wrote that code, but it sure wasn't me.23
-
- "Oh haha I found out why your PC isn't turning on, you forgot to turn on the power switch!"
*flick*
- in an alternate universe there exists a clientRant, where clients come together and share their horror stories of dealing with devs.6
- Ok so my server was really dirty and slow and I got tired of it.... So I decided to give it a bath.26
- At my first job, I got tired of having to type a user name and password every time I debugged the web application. Thinking I was clever, I put in a hack so that if you launched the application with the query string "?user=Administrator" it would log you in as the administrator. So much typing saved!
A couple days after the next release, I realized it shipped like that. In absolute horror, I walked into my boss' office, closed the door, and told him the tale of my mistake.
He just looked back at me, and after a moment or two said, "Loose lips sink ships."
And that was it.5
- Me-
/ / / / / / / / /
if (bool == true)
{
bool = false;
}
if (bool == false)
{
bool = true;
}
/ / / / / / / / /
My friend-
/ / / / / / / / /
bool = !bool;
/ / / / / / / / /
*not a real story*15
- Once I had been sick for a couple of weeks. Then came back to this abomination
Didn't make me feel any better 😨9
- A guy wanted me to convert his Unity project from Android to ios.
He was offering less than 50$....
It dosent end here....
He sent me the files in a zip on Google Drive.....
Also the project size was around 3 gb......
😢😢2
- Moved into a new house. No internet yet, and my mobile data barely connects. Real-life horror story.
- I'd like to see a horror movie called "Exception", where the main character experiences exceptions which should not happen according to the code's logic.
- 'hey honey look what i made! It works!'
- fiance looks, error messages over error messages, program crashes, files disappear, data loss, pure horror
To this day I don't know what happened. I had to restore my project and re-write the last half hour
- So I found this in the code:
Inet4Address.getByAddress (new byte[] {0x7f,0x00,0x00,0x01})
It's localhost. Why the actual fuck would you declare 127.0.0.1 as a fucking byte array!16
- Finally got a new job, but it's already a horror story not even 2 hours in (making this while on break)
Everyone here is an Intern, IT? Interns, Designers? Interns, HR? Interns.
The Person who I should've worked with got fired yesterday, and now I have to work all of his shit up from 0, Documentation? Fragmental, a few things here and there, but nothing really.
IT security also doesn't exist in the slightest, there is an Excel sheet called "Master_Passwords" and every single password is in Plaintext, written out for everyone to see. (at least they used "strong" passwords)
And the place also looks run down, theres PC's, Laptops, Mics, Cables etc. lying literally everywhere no-one knows what works and what doesn't (since everyone is an intern)
Not to mention the "Server Room" is an absolute mess itself, cables hanging from literally anywhere, powerstrips are ontop of servers, each rack has like 2 or 3 2U Servers, (in a 40u Rack) and there are 10 of them!5
- Instead of just accepting my proposal about upgrading the server - the customer I have ranted about earlier decided that it would be a good idea to try to install PHP 7.0 on a Debian 5 machine.
I feel horror when I read the bash_history3
- Right i'm building a new interview test. Short and simple, I give the potential hire a laptop with a browser open on this:
If he / she recoils in horror, they are hired.20
- So I watched this video that tries to convince people, that the jQuery library isn't really best practice anymore and showed how you can achieve basic tasks with vanilla JS, aswell as some frameworks (Vue, React, Angualar) and how they handle interactions with the DOM.
It also talked about how nearly every JS question on SO has top answers using jQuery, and how that's a bad thing.
But what I found in the comment section of this video was pure horror: So-called "Developers" defending jQuery to the death. Of course there were some people who made some viable arguments (legacy code, quick & dirty projects), but the overwhelming majority were people making absurd claims and they seemed quit self-confident.
GOSH!
Want an example?
Look:22
- Meanwhile at a presentation.
Me: This is the new feature that I've been working on.
App: "There was an error loading the application!"
Me: (shrieks in horror!)4
- I love Linux, use it almost daily
, but those windows update horror storys aren't happening when I'm using windows.
I think those days are gone, and people are just hating.27
- This is the state of desktop computing: When a web browser uses twice more RAM than a full virtual machine.
To be fair, I did have 5 windows with >10 tabs each, but still...13
- The fact that this week's rant topic exists is the exact reason I never do freelance work. It always ends in horror.
I don't believe good fullstack devs exist. A good junior will assess their talents/passions and specialize to become a senior. They'll keep checking the rest of the market from the corners of their eyes and start moving when necessary. But eventually, focus and specialization is a must.
If that's true for the field of development... why would I invest time in marketing, support, sales, PR, legal?
Maybe you can earn a bit more gross income in freelance work — but I see my employer as a service provider, I "hire" them so I don't have to manage phone calls, lose sleep over court cases, play product owner or worry about getting paid.9
- A friend of mine once showed me a really beautiful and fast login page he made.
After some time I took a look at the front-end code and realized why it was so fast... He queried the database in the JS code...
The user and pass for the database was in the HTML of the website...
I think he deployed that page...5
- Connect a pen drive, format it successfully. Connect to a new machine to copy data and see the data exists.
Crap! which drive did I formatted :
- Manager: With all the horror stories why are we even developers?
Me: Because once we get part the horror, we become geniuses.
Manager: So what you're saying is that being a developer is like taking a crap after being constipated for three weeks?2
- Sacrificed my Diwali and made a website for my first client and did not get any payment.
I was new and less confident.
Well this is my freelance horror story.
Now I fucking demand money from client UPFRONT.1
- There are numerous horror stories about Windows updates, but Mac updates aren't better. Yesterday I updated to Catalina (reluctantly), and when the computer restarted I was silently praying "Please don't fuck up my VMs".
Naturally, it fucked up VMware...8
- Just had to install XP to adapt our VB6 client code for a new release of the server. Oh the horror of it all!2
- !Rant
Story, only read this if you feel like wasting your time
Ok so I live in a small village and it takes around 15 minutes to get to the next city by car. I can't drive yet because I am 15 and so I would need my parents to drive me there. There are also no buses anymore which drive to the city after 2pm.
Most of my friends live in that city, none of them code. We always meet on a discord server and then play games or do some other shit. Today I got online at around 3pm and when I joined the discord server they asked me if I wanted to go see the movie 'IT' with them tonight, I said yeah of course (I am a huge fan of horror movies), but only if my parents come home early enough to drive me there.
Time passed and then my last friend left the discord server because he had to walk to the cinema.
I was the last one still on the server and also the one with the farest way to the cinema. I already knew that my parents wouldn't come home in time anymore and so I decided to just start coding something. I usually code while listening to some music and so I switched over to spotify to choose a playlist. I just randomly clicked on the first playlist spotify recommended me and the song started playing: 'Sound of silence'.
Fuck you spotify algorithm.
I know that not being able to go to the cinema with your friends is a fucking stupid reason to be sad but I just feel very sad right now. Sitting alone in my dark room staring at my computer screen.
Sorry for wasting your time18
- You always see fail2ban message:
There has been n failed attempts"
But... Have you ever thought about how many successful logins have been?2
- How’s this for a horror story? Adding a new feature to a 6,000 line and 100% undocumented stored procedure in a 20+ year old Oracle database.2
- Saw this popup in my feed today. Glad I've never had to deal with code like this.
Sauce: @LoganDice@mastodon.social6
- Have you ever looked at code you had written years or even decades ago and asked yourself either:
1) How this this even work?
2) What the hell does this do?
3) or, I can’t believe I wrote that. (In horror or serendipitously)5
- /*************
* adds 1 to i
***************/
i = i + 1
Everyone has seen at least one comment like this.6
- I really want to make a pen that will shock the user with 1000V when the pen detects that the user is writing code on paper.4
- When you are finish doing the UI of the app then your UI/UX expert suddenly want to change everything, because he/she saw much cooler ui in the pinterest. The Horror!2
- Security Horror Story:
A password authenticator which is case-insensitive and all special characters are treated as the same value. As a bonus, all passwords are truncated to 4 characters.2
- Boi did I forget what a horror is to deal with Wndows...
I just wanted to shutdown a laptop to replace the SSD and a wifi card. Prepared everything, clicked on the [start] and there were only "Update and *" options. Wha the hell I thought, I could spare a few minutes. It's just a software update - should not take long!
Little did I know...
That was 45 minutes ago and It's still shutting down. And I'm just sitting with that screwdrived in my hand, looking at that blue screen and waiting. I feel stupid
UPDATE: I gave up. Long-pressed the POWER button. que sera, sera, right?
Lights go out. I press POWER again to boot it back up (forgot to save smth else). And it boots up back to the "SHUTTING fucking DOWN" AGAIN!!!26
- The pandemic is more serious than I thought. Out of boredom, I started writing a book. Post-apocalyptic sci-fi horror. I have 75 standard pages behind me and I still have something to write.
I guess a lot of people trying to do the same1
- TIL that Python's "everything is an object" mentality allows you to do
def some_function():
    some_function.variable = "abc"

some_function()
print(some_function.variable)
> abc
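This works because def creates an ordinary object, so attributes stuck onto the function persist between calls -- for instance this hypothetical call counter (not from the post above):

```python
def counter():
    # The function object is created once, so its attributes
    # survive across calls; getattr supplies the initial value.
    counter.calls = getattr(counter, "calls", 0) + 1
    return counter.calls


counter()
counter()
print(counter.calls)  # -> 2
```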
- (As a freelancer I was asked to do a couple of tasks on legacy code)
Let’s check this code, how bad can it be?
- all of the following: unreadable mess, no auto linting
- tests: some are there cause there’s not enough automation, others are poorly named
- frontend: somehow a genius made a react component for every variable in the store which only passes the variable to the child (wtf)
- backend: death by best practices
- ci/cd: “we have it but it’s broken”
Let’s fucking goooo 😎
Diagnosis: my therapist is getting rich
Chances to not cry tonight: close to zero
At least they pay well 🤷♂️
- That sinking feeling of horror when you are helping a student with a problem in their assignment only to see major logic errors that prove they dont actually understand what they are doing. * drinks wiskey in prep for marking *5
- just love how upset the sticker looks with ThinkPad.
Thanks you guys! Your envelope managed to overcome even the horror we know as the Israeli Postal Company 👏4
- Colleague just found out that redislabs silently truncates passwords to 20 characters. Naturally our development department was horrified. It also explains why we couldn't access our account anymore. Who would even come up with such a thing?1
- Okay it's FUCKing rant time... FUCK you prestashop!
FUCK your utterly bizarre "coding standard"
Also a big FUCK your config files, since when did config files start to include application logic, multiple includes/requires and modification of super-globals. When did I miss that memo?
This file is full of so much FUCKing horseshit, my FUCKing testicles hurt.
FUCK your "module overrides", yes let's duplicate 20-30MB senseless horror code into another folder, just so we can modify one line, without having future updates breaking our stuff.
And your attempt to migrate to a symphony stucture is FUCKing pathetic, do it properly, or don't do it at all.. FUCKtards..
I know wordpress can be bad, but this...
Prestashop takes FUCKing lousy, headache/cancer- giving, piece of crapware to the next FUCKing level.
I wouldn't even wish this FUCKing upon my worst enemy.2
- I've been watching a few horror games made with RPG Maker. People say games made using it are shit which of course, makes me want to try it even more. It's on sale right now and I could use something to go along with my plan to practice pixel art again. I read a lot of reviews that say it doesn't require programming. It would be a nice new relaxing hobby if that's the case but I searched for what language can be used for it anyway, just in case. Older versions say Ruby and I was like, "Perfect, I remember Ruby. It was pretty neat. I don't mind working with it." I checked the newer versions and found out that they moved from Ruby to JS a long time ago.
After several hours of "Fuck no, I'm not touching that on my free time", I decided to grow up and get it anyway. Who knows? Maybe the games I make are so shitty that I wouldn't have to script at all. It would just be a hobby and maybe a more comfortable way of telling a story.
Well, I hope I enjoy doing this. The monotony of pandemic living is driving me insane. I skipped a whole Saturday sleeping.15
- Playing horror games
| Lights on -> cant see shit on the screen and more vulnerable to jumpscares
| Lights off -> the atmosphere of the room is generally spookier and i get scared of misc. noises
Oh well13
- Microsoft fucking Sharepoint.
How can software so shitty exist?
To upload an image, I need to F12 and increase the dimensions on an iframe so I can click the save button.
Have any Sharepoint horror stories to share?4
- My freelance horror story is that I don't work freelance, but rather for a corporation where I am a faceless entity known as asset.1
- So this is quite a big project, hundreds of files everywhere, pages are rendered using multiple files.
This is one of the latest created page, it was made by my boss, and it just give me the creeps.
I REALLY don't know how he always comes up with shit like these.
I just hate having more than 5 closing tags in sequence...6
- Forgive me for I have sinned.
I feel like I need a bleach shower after having had to write these methods in my Dart codebase12
- In horror movies. Characters do things that they them-self know it could kill them. Like getting in basement.
As a developer we all know to never deploy code on Friday. Here we are taking the wrong turn."
Said to a new team member before they embarked on a journey of pain as I took them through a huge web app made with jQuery (think: 10K lines of DOM manipulation horror), WCF, and sadness.
- today it finally happened.
Npm modules broke my system and / or endangered the security of my system.
Installed a global cli utility
That utility depends on package A
That depends on package B
That fucking install a bin called sudo
Yeah.. You heard it right a bin called sudo.
This bin goes in the global module folder that is piped in your path variable.
Now everytime you type sudo you are running somebody else code instead of your system utility.
I am shivering and at loss of swear words.
Opened an issue on the cli that started this matrioska game of horror.
Who the fuck tought that a bin called sudo would be a good fucking idea?
Oh and yes is even an harmless package that try to provide the sudo experience for windows (I went in to check the code of course..)
And I frigging need that cli for work
For now I aliased the sudo in my bashrc still i feel vulnerable and naked now.10
- Imagine the horror of learning C programming with manual memory management, pointer arithmetic and without your cool utility libraries after programming for 2 years in Python just becoz it's in the fukin syllabus!!14
- No matter how hard I try it will always find its way.
Sometimes it's fun to debug sometimes it's
- Today I feel like a coding vampire, let me create a new Xamarin project and boOoOost with the code!!
*Creates a clean project, finds 1492 errors* well... f*ck it4
- Working on codebase of a 20+ year old system that the company I work for bought five years ago and in that time there’s been no refactoring, no security updates, no attempt to create automated testing (there is none), new features have just been built on the codebase with no regard for quality and it’s just spun into the horror cesspool that it is today.
I joined one year ago and I’m slowly refactoring the codebase and updating it to get it to a more modern codebase, cleaner code, faster load times and creating a ton of dev documentation so the devs in India can start getting into best practices and start producing quality code.4
- just turned my keyboard over and gave it some gentle taps for the first time in three and a half years of working here..
I wish I had not done this.
- Reading Python code written by Java programmers. I have no words to describe the horror. CamelCase everywhere...2
- If it's an open source project and it (shock horror) actually gets a decent userbase, expect it to suck up *way* more time than you originally intended.
- Let me tell you a tale, children. Of how one of the mostly ghastly, horrid pieces of software currently on this earth came to be in its current, pitiful state.
It all began on January 28th, 2015.
On that day, Tim Cook, CEO of Apple, sat leisurely in his office. He had just finished watching a live stream for a conference held by Facebook.
Minutes after the stream ended, he quietly sat in his chair, pondering over what was just shown.
The whole keynote was well done, he thought. But something about it just didn’t sit right with him. It was one specific line uttered by one of the keynote speakers that bothered him.
“React Native will help developers easily write code that will work on both iOS and Android”.
Out of all the talking done throughout that conference, it was that sentence, in particular, that stuck out like a sore thumb to Cook.
Those words began to echo in his head. “...Android”, Tim muttered to himself, gritting his teeth.
He immediately grabbed his iPhone from his pocket, and called the Technical Director of Xcode.
On the phone, the two discussed Xcode as it pertained to Facebook’s latest tool.
“Now, I’m not saying that we shouldn’t provide any support for React Native”, Cook told the director; “Just make it a bit more inconvenient for anyone using React, that’s all”.
The director thought his boss was nuts. Why on earth would you want to intentionally make using an IDE as painful of an experience as possible? But the technical director also knew that, more importantly, he wanted to keep his job.
“...We’ll do our best to make it a total pain in the ass to use React Native in Xcode”, the director told his boss with a shrug.
And so began one of the sickest jokes ever played on developers. A joke so twisted and cruel, it would make even the creator of PHP gasp in abject horror.
Who knew that someone would go out of their way to create an IDE that doesn’t even bloody work half of the time.
And don’t get me started on the absolute piss-poor excuse for documentation this thing has.
- Was called on site because my client was having a problem with a system I developed years ago, a LAN based management system. The horror came when I asked for the most recent backup. It was only two years old..
- My first freelance dev job thing, turned out alright. For the first year though, the dev job thing became a tech support thing. Oh the horror.
- Finally found the place where other devs can relate to the horror/fear of posting a question on StackOverflow
- Stack Overflow is like a re-run of the Milgram experiment. Give a bunch of devs authority over their peers and watch the horror unfold.
Think I'll nip over there and ask what the best JavaScript framework is just to stir them up.
- I wanna rant about a freelance job but I can't. Tell me your freelancing horror stories instead :D
- Anyone ever had exams where the teacher tells you that if you study said part of the course you'll pass, then got completely blindsided on the actual exam... Oh the horror 😂
- I watched the movie 'Happy Death Day' today. It really isn't a horror movie but I was thinking of version control the whole time.
The point where she wakes up all the time is the master branch and the different days she experiences are different branches.
- My original project has morphed and twisted to become this monster where I’ll need to learn PHP, databases, and somehow get a desktop C# app to read the remote database and execute tasks based upon times in the database. God help me.
- That cloud of dread that hangs over your weekend because you deployed just before five last night.
- Three syntax elements, pixels on screen.
By Unknown (for privacy), 2021
In this installation, the Author's desire to prove to the whole world that stupidity is achievable with just 2 syntax elements is... self-evident!
Observe! The finely crafted letters composing this installation in their beauty! While the middle element is purely a distraction (one could argue it's there to make sure a critical issue doesn't happen even if the default value is already `true`), the sides of the installation reveal the true horror.
As the vision of the observer is attracted to the center, the peripheral vision sends the information to the subconscious, making the observer slowly realize both that the Author willingly compiled `.less` files with postcss and that the .less files are in the css folder, proving that stupidity is demonstrable in just two syntax elements.
A masterpiece.
- I have dabbled with Linux quite a few times in the past (dual booting with Windows) and I'm looking to get into it again. Any recommendations? Any horror stories?
So far I've daily-driven Ubuntu, Mint, Manjaro, and CrunchBang. KDE and Gnome are my go-to DEs.
- What’s a really expensive MySQL query? I need to add it to a CMS application for “research purposes”.
- I really don't understand this particular Government Department's IT Unit. They have a system and network to maintain except:
- They don't have a DBA
- They don't have a dedicated Network Engineer or Security Staff
- Zero documentation on all of the systems that they are taking care of (it's all in each assigned staff member's brain, they said)
- Unsure and untested way of restoring a backup into a system
- Server passwords are too simple, and only one person has been holding them this whole time -- and it's to an Administrator account. No individual user accounts.
- System was developed by an in-house developer who is now retired and left very little documentation on its usage, but nothing on how it's set up.
But the system has been up and operational for the past 20 years with no major issues whatsoever for the users. I mean, it's a super simple system setup from the looks of it.
1 App Server connected to 1 DB Server, to serve 20-30 users. But it contains millions of records (2GB worth of data dump). I'm trying to swing them to get me on part-time work to fix these gaps.
God save them for another 20 years.
- Worst dev tech ever for me was Intel x86 assembler. I developed on Motorola 68k before and I loved it. x86 was horror.
- I JUST typed a huge-ass happy post on LinkedIn and right after I hit the "Post" button, my feed got refreshed and there was no sign of my post getting published... Wtf 😡
NOT typing it all again. Nope
- Ok the ionic datetime component horror ended. I got it working well enough, but I'm not proud of my code. The whole thing is a giant hack: parsing dates to and from strings, switching locales, setting months and days, and using the month as a daypicker, as this fucking component does not allow me to display day names in it, only month names. Such a mess... at least now I can work on the stuff that matters.
Actually thought about making this open source... I reconsidered.
- There are people I've seen myself multiple times who quit vim/vi using Ctrl+Z. I was lucky enough to ask one person to just run "jobs". That was horrifying.
I just needed to share this somewhere ...
- !Not Rant
I'm so hopeful. It's actually comedic.
Short backstory catch-up. I started working with an *actual* huge firm.
And unlike my other horror experiences with huge monopoly firms, this one is actually chilled out. So different that it seems almost like a startup.
Idk how tf they preserved this dynamic but I literally like everybody in this team of twenty-ish individuals. In fact I somehow even look up to some of them.
Hope this stays up and I might be locked in for a few more years.
- 1) Miss Ludum Dare 40 because you remembered it is on one day after it started.
2) See Ludum Dare 41 is this week on time.
3) Lack the time to do it.
4) Commit suicide.
- I am going to watch The Rocky Horror Picture Show now. Is it wayyy too early, or are ALL times good times to watch it?
Please choose only 1.0000000000000001 options.
- If y'all thought Emojicode was bad, take a look at this magnificent horror that mixes emojis and SKI combinators
- When the PM has been letting a fresh-faced graduate loose on a codebase without any code reviews and you come back to some Cronenberg-level horror in your now crippled project. But it LOOKS like the mock-ups.... * internal screaming *
- Writing a horror screenplay. It starts off with a ringing phone. The person answers, and it's their mum saying "I have a computer question."
- Got offered a great opportunity to learn x86 for Linux from someone with years of experience. Happily accepted. Later realized this is eerily similar to the beginning of a horror movie.
- Gonna be applying for college. Gonna major in programming.. I've always read the horror stories.. I hope my teachers aren't retarded and don't make me use shit software..
Any advice?..
- Spent too much time on devRant today and my oh my, the horror 🤣
Today I'm not interested in software development anymore, let's see what tomorrow brings.
- I am a web-dev wannabe marketing person. I was locked up in a mental ward twice because I often get way too paranoid about security issues that might never happen to me. Last time I even refused to use hexcode because I believed that my computer was being used for someone's crypto mining. I am still scared of node.js, CDN and googlefont or TIFF, pdf files, and many other things that I don't understand perfectly.
It's always a breathtaking, cliffhanging, and thrilling session when I'm working on something with my computer. My heartbeat gets faster, my palms get sweaty when I start to type <script>. It's like when you watch a horror movie, or wear a seatbelt on a roller coaster before the ride begins. You are frightened but excited at the same time. 🤤
- When we had to do our first html5 exam at school... On paper... Written in CAPS... #trowback #horror #worstteacherever
- Why is it so easy to just keep thinking to yourself, I really want to do this project. Then literally never want to actually touch it, so many great ideas filling up multiple text files strewn between devices. Stupid motivation.
- Everybody nags about PHP inconsistencies, but have you tried Excel VBA scripting and addressing A1 vs R1C1 notation regarding ranges and names? Pure horror.
- Books, mostly. Never completed even one though. Started with VB6 a long time ago. Just wanted to create cool stuff. Then moved onto Java. Heck, even did J2ME. **shudders with horror** Now doing Android and Python mostly.
- Anybody has an opinion on CMU for a machine learning or robotics PhD? You think they'll let me in? (I've heard horror stories from their selection process tbh)
Also, any good Canadian unis and degrees for AI/robotic combo Ph
- I get that this was a small team when they developed this and there were no coding guidelines...
But using spaces and tabs in the same file is just unacceptable!!
- Finished my job, now making a WordPress theme for a freelance project.. Ahhhh.. A good horror and a cold beer awaits me this evening..
- Since this has been trending recently, here's my six word horror story -
"I accidentally deleted the production database"
- The one single time I successfully remember my dream from the previous night, it just had to be being chased through a spooky building by this week's sprint.
- I wish I could show y'all my fav youtuber, the dude's youtube name is Dross. He makes entertaining videos about horror, some of them are not legit at all, but still entertaining to watch. The issue is that they are in Spanish.
It sucks.
- Keep hearing horror stories about React Native, why is that? I just started learning it, is that bad? Is it worth learning?
- How on earth is there any "sane" software (eco-)system, or will it always be so crazy? Because, as Pieter Hintjens might have said, all this soft- and hardware is created by this social animal called human, with all its faults and aberrations...
So it was just, that I could not print - probably because of this bug:... - couldn't install a newer ghostscript. So I would scp my files inside an Ubuntu VM from which I could print. Sometimes I could pdf2ps some files or transfer back the ps-file and print on my host machine, but mostly not... U n t i l today, when I installed the fucking debug symbols package for ghostscript and I could just fucking print. Heisenbug, ghost error or what
- So, some friends of mine are going to work on a horror game in Unity 2019.
Does anybody have software recommendations for audio editing, shader development or such?
Any advice greatly appreciated.
- I realize that my story made no sense so I'm back to square one.
The story was supposed to give the players an option on how they want the outcome to be, but I wanted it to show the horror of a straight storyline. Rip
- I'm easily a DigitalOcean fan, though I have heard horror stories, so I might set up a system to do regular backups.
I'm considering migrating my current server to something FreeBSD-based, so I can easily do ZFS snapshots, and even code on my machine at home and just send the jail as a snapshot. Like docker, but different.
- My internet connection is so messed up. Again certain websites are not loading on my Mac but they are loading on my phone using WiFi. I tried clearing cookies, flushing the DNS cache and changing DNS servers to OpenDNS or Google DNS.
- Holy shit my dream last night was the best I had in years. I was basically in this open world (just real life) kinda thing that felt a lot like DayZ with the atmosphere (just no zombies). We were a lot of friends just running about. But it was also kinda an exploration dream where I went into this bunker on the airfield (I guess) and then it was more like an action horror game where I had to shoot the most disgusting creatures. Except one monster was a cute girl (yeah I don't know). The dream then shifted to cuddling and making out with this super cute girl in bed
Oh man. This dream had it all and it was crystal clear the whole time, it was just amazing
Sorry, not sorry for reading this lol
- Found this in a shell script. Instead of just one regex, why not use grep and sed, even though you could have just done it all with sed!
IMAGE_TAG=`grep defproject project.clj | sed -e 's/^.* \"//' -e 's/\"//'`
- Git starting to merge changes to your private personal repository is scarier than seeing someone else in the mirror in an empty room.
- Not my co-worker, but please someone kill this guy!!
WARNING: INDENTATION HORROR AHEAD.
- SHUT IT DOWN !
SHUT IT DOWN FOREVER !
APPARENTLY THE DREAM OF INSPIRING THE NEXT GENERATION TO VIEW THIS ALL AS HORROR DIDN'T SET IN DESPITE JENNIFER CONNELLY AND HER FINE TITS MAKING AN APPEARANCE AND I MADE THIS JOKE TWICE NOW !
- MySQL tables named haphazardly, table column names in mixed cases, redundant columns and tables, the horror
- Not a horror. I'm rewriting services.
It started as a help request. I was asked to help with completing a service dealing with push notifications, which was a research prototype. It was suggested to keep the core part of it, but it was so awful that I just removed all the files and wrote the service from scratch.
The second service had been developed for more than a year, first by a junior and then by our manager, who wanted to complete it as fast as possible without taking care of code quality. Then I was asked to take over the project, and after some time I agreed with one condition: I'd have 1 month for the takeover. But when I looked at the code, it became clear that it would be much faster and better to rewrite everything except the API and database than to take over the existing code.
The third service, dealing with file exchange, was working, but the junior who wrote it advised rewriting it because it was a very simple service. So I initiated the rewrite, designed a new API and reviewed the final result.
And now I'm dealing with the fourth one. It was developed in my team but not under control. Now, when I "inherited" this complicated project, I decided to rewrite it because it should be simple, but it isn't. It features reflection, layers inside layers, strange namespaces, a strange solution structure. And that's after months of refactorings and improvements. So wish me luck, because I want to keep part of the infrastructure, but I don't know if it's possible.
- Is WordPress' use of God objects really such a big problem? I mean, sure, wp_query is used for every possible purpose and is the most mutated piece of horror ever. But what is the harm?
- Gasping in horror when an assignment begins with Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace...
- That moment when you get started as a junior frontend and end up having to design a couple of databases. I don't know shit about that. 😱 Also in between I happen to become sort of unofficial IT staff in this music school. I'm confused. Not a proper rant, but just wtf? At least I'm getting money... someday. Maybe. 😲
At least I'll have my voice and piano lessons covered. 🌟
- If it were possible I would make windows punish itself by playing this: . But this goddamn piece of garbage software refuses to connect to my LAN network since an hour ago. Why? I have no fucking idea. There are a switch and a repeater between it and the router, but that never stopped wanblows before. And the devices located deeper in my network topology are fine; I also reset everything from the physical connection to "network adapters" in the UX horror they call system settings.
And I'm pretty sure it'll work again in an hour or so for no apparent reason, just so this steaming pile of shit code could ruin my afternoon.
- The others!
Sounds like a horror movie XD
Anyway, in animation the motto is "animation is concentration", but I think all work is.
P.s. sorry for my bad English
- Legacy code horror story.
I had to work on code designed to identify dogs... didn't go well:
(Credit to reddit:...)
- VS Code is a horror. Every other editor I just picked up and it ran. VS errors out on obscure demands again and again and again. I don't want to spend time learning this POS when I'm learning Julia. What's horrible is Julia developers, such as in Juno, are abandoning their own editors to go to VS Code, which is antithetical to the whole idea of Julia - to be easy to use and replace multiple languages. They abandoned Juno for a hard to use editor whose only feature is multiple languages.
- I know liquid-nitrogen-powered air cans heat up after getting cold during use, and boil in the process, extending use time, but do they really need to make disturbing noises that sound like a clicking creature in a horror movie???
This is an in-depth, behind-the-scenes look at strings in the Common Language Runtime and the .NET Framework. This study provides detailed information on the implementation of strings and describes efficient and inefficient ways of using them.
I plan on developing a series of articles exploring the underlying implementations of various fundamental features in C# and the framework, of which this is the first, if The Code Project community finds this information helpful and offers any additional suggestions.
Most of the information presented here cannot be found anywhere else -- not in MSDN, not in any book, and not in any article. My intent was to understand how to use C# efficiently to develop a serious commercial application. I am an ex-Microsoft developer of Excel who has started his own software company, developing applications employing artificial intelligence.
Strings, as you know, are a fundamental type in the .NET world. They are one of a small set of exclusive objects that the CLR and the JIT compiler have intimate knowledge of. The others include, of course, the primitive data types, StringBuilder, Array, Type, Enum, delegates, and a few other reflection classes such as MethodInfo.
In .NET version 1.0, each heap-allocated object consists of a 4-byte object header and a 4-byte pointer to a method table. The object header provides five bit flags, one of which is reserved for the garbage collector to mark an object as reachable rather than free space. The remaining bits form a 27-bit index called a syncindex, which may refer to another table. This index has multiple purposes: as implied by its name, it is used for synchronization whenever the "lock" keyword is used. It is also used as the default hash code for Object.GetHashCode(). It does not provide the best distribution properties for a hash code, but it meets the minimum requirements of a hash code for reference equality. The syncindex remains constant throughout the life of the object.
The String class and the Array class (including descendant classes) are the only two variable-length objects.
Internally, strings resemble OLE BSTRs: an array of Unicode characters preceded by length data and followed by a final null character. A string occupies additional space because it includes the following members, in order (later, I shall explain how to access and change some of these internal variables, with and without using reflection):

- an int holding the capacity -- the length of the allocated character array, bounded in principle by UInt32.Max
- an int holding the logical string length, whose high two bits are reserved for flags
- the char data itself, followed by the terminating null character
Strings are always appended with a null character, even though it is valid for a string to contain embedded null characters. This facilitates interoperability with unmanaged code and the traditional Win32 API.
Altogether, strings occupy 16 bytes of memory + 2 bytes per character allocated + 2 bytes for the final null character. As described above, the number of characters allocated may be up to twice the string length if a StringBuilder is used to create the string.
Closely related to String is the StringBuilder class. Although StringBuilder is buried within the System.Text namespace, it is not an ordinary class but one that is specially handled by the runtime and the JIT compiler. As a result, it may not be possible to write an equivalent StringBuilder class that is as efficient.
StringBuilder will construct a string object (which, you thought, was immutable) and modify it directly. By default, StringBuilder will create a string of 16 characters. (However, it might have been more appropriate to use an odd number like 15, since the string will be allocated with space for a null character, wasting space for another character that would otherwise not be used because objects are aligned on 4-byte boundaries when created.)
If the string needs to grow beyond its capacity, a new string will be constructed with the greater of twice the previous capacity or the new desired size, subject to the MaxCapacity constraint. This approach takes O(3n) copying work overall, which is linear time. The alternative approach of growing the string by a fixed amount rather than a percentage results in quadratic time performance, O(n^2).
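The growth policy is easy to observe empirically. The sketch below (hypothetical driver code, not from the article) appends characters one at a time and reports each jump in Capacity. Treat the output as illustrative only: later frameworks moved StringBuilder to a chunked internal representation, so the exact numbers depend on the runtime version.

using System;
using System.Text;

class CapacityDemo
{
    static void Main()
    {
        // The default capacity is 16; each overflow grows the buffer to the
        // greater of twice the previous capacity or the required size.
        StringBuilder sb = new StringBuilder();
        int lastCapacity = sb.Capacity;
        Console.WriteLine("Initial capacity: {0}", lastCapacity);

        for (int i = 0; i < 200; i++)
        {
            sb.Append('x');
            if (sb.Capacity != lastCapacity)
            {
                lastCapacity = sb.Capacity;
                Console.WriteLine("Length {0} forced capacity {1}",
                                  sb.Length, lastCapacity);
            }
        }
    }
}

Running this on a given runtime shows the geometric growth that keeps total copying linear in the final length.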
When StringBuilder.ToString() returns a string, the actual string being modified is returned. However, if the capacity of the string exceeds more than twice the string length, StringBuilder.ToString() will construct a new, compact version of the string. The same string is returned by multiple calls to ToString().
Modifying the StringBuilder after a ToString() results in copy-on-write: an entirely new copy of the string is used, so as not to change the previously returned immutable string.
StringBuilder costs 16 bytes, not including the memory used by the string. However, the same StringBuilder object can be reused multiple times to create several strings by setting Length to 0 and Capacity to the desired size, so the cost of a StringBuilder is incurred just once.
As you can see, creating strings using StringBuilder is an extremely efficient operation.
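Since the cost of a StringBuilder is incurred just once, one builder can be recycled across many strings. A minimal sketch of the pattern follows; the helper and its names are my own invention, and the shared builder is, of course, not thread-safe.

using System;
using System.Text;

class ReuseDemo
{
    // One builder, reused for every string we construct.
    static readonly StringBuilder shared = new StringBuilder(256);

    static string BuildGreeting(string name)
    {
        shared.Length = 0;          // reset the contents, keep the buffer
        shared.Append("Hello, ");
        shared.Append(name);
        shared.Append('!');
        return shared.ToString();
    }

    static void Main()
    {
        Console.WriteLine(BuildGreeting("Ada"));
        Console.WriteLine(BuildGreeting("Grace"));
    }
}

Resetting Length to 0 avoids reallocating the buffer for each new string, at the price of holding on to the peak-sized buffer between uses.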
Other performance tips:

- The multi-argument Concat(a,b,c) overloads join several strings in a single allocation, avoiding the intermediate strings that pairwise concatenation would create.
- A StringBuilder can be constructed with an explicit MaxCapacity, which bounds how large its internal string is allowed to grow.
Utilizing StringBuilder to construct strings can significantly reduce allocations. However, a number of quick tests have demonstrated that even a full garbage collection can occur in a fraction of a second -- an imperceptible amount of time. It may not be worth avoiding garbage collection without profiling the application first. On the other hand, frequent garbage collection could be detrimental to performance. I have sometimes noticed a short unexplained pause in a .NET application; it's hard to say whether it is due to the JIT compiler, the garbage collector, or some other factor. Many old-fashioned applications, such as the Windows shell, Word, and Internet Explorer, have unexplained pauses as well.
.NET uses a three-generation approach to collecting memory, based on the heuristic that newly allocated memory tends to be freed more frequently than older allocations, which tend to be more permanent. Gen 0 is the youngest generation and, after a garbage collection, any survivors go on to Gen 1. Likewise, any survivors of a Gen 1 collection go on to Gen 2. Usually garbage collection will occur only on Gen 0, and only after its allocations have reached some limit.
The cost of allocating memory on the heap under garbage collection is less than that under the C runtime heap allocator. Until memory is exhausted, the cost of allocating each new object is that of incrementing a pointer -- close to the cost of advancing the stack pointer. According to Microsoft, the cost of a garbage collection of generation 0 is equivalent to a page fault -- from 0 to 10 milliseconds. The cost of a generation 1 collection is from 10 to 30 ms, while a generation 2 collection depends on the working set. My own profiling indicated that generation 0 collections occur 10-100 times more frequently than collections of the other two generations.
One book by Jeffrey Richter that I have read suggested hypothetical limits of 250Kb for Gen 0, 2Mb for Gen 1 and 10Mb for Gen 2. However, my own investigation into the Rotor shared source CLI indicated that the threshold appears to be initially 800Kb for Gen 0 and 1Mb for Gen 1; of course, these are undocumented and subject to change. The thresholds are adjusted dynamically according to actual program allocations: if very little memory is freed in Gen 0 and little survives to Gen 1, the threshold is expanded.
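On later runtimes that expose GC.CollectionCount (it is not available on .NET 1.0), the generational pressure of naive concatenation versus StringBuilder can be observed directly. This is a rough, illustrative sketch of my own; the counts will vary by runtime and machine.

using System;
using System.Text;

class GcPressureDemo
{
    static void Main()
    {
        // Count Gen 0 collections triggered by repeated += on a string.
        int before = GC.CollectionCount(0);
        string s = "";
        for (int i = 0; i < 10000; i++)
            s += "x";                      // allocates a new string each pass
        Console.WriteLine("Gen 0 collections from += : {0}",
                          GC.CollectionCount(0) - before);

        // Same work through a StringBuilder: amortized doubling, few allocations.
        before = GC.CollectionCount(0);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++)
            sb.Append('x');
        string t = sb.ToString();
        Console.WriteLine("Gen 0 collections from StringBuilder: {0}",
                          GC.CollectionCount(0) - before);
    }
}

The first loop allocates ten thousand progressively larger strings and typically provokes several Gen 0 collections; the second allocates only a handful of buffers.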
Many of the functions provided by the String class generate needless allocations that increase the frequency of garbage collections.
For example, ToUpper and ToLower will return a newly allocated string whether or not any changes were actually made to the string. A more efficient implementation would return the original immutable string when nothing changed. Likewise, Substring will return a new string even when the entire string or an empty string is returned; it would have been more optimal for String.Empty to be returned in the latter case.
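Whether a given call really allocated can be checked with Object.ReferenceEquals. The sketch below is illustrative: on the 1.x runtime described here both calls return copies, while some newer runtimes short-circuit cases like a full-range Substring, so I deliberately print the results rather than assert them.

using System;

class NoChangeDemo
{
    static void Main()
    {
        string s = "ALREADY UPPER";

        // Even though the text is unchanged, ToUpper may hand back a new object.
        string upper = s.ToUpper();
        Console.WriteLine("ToUpper returned the same object: {0}",
                          ReferenceEquals(s, upper));

        // Substring over the full range: a copy on the runtime described here.
        string whole = s.Substring(0, s.Length);
        Console.WriteLine("Substring returned the same object: {0}",
                          ReferenceEquals(s, whole));
    }
}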
It is very difficult, if not impossible, to escape the numerous hidden allocations occurring within the class library. For example, whenever a numeric data type such as an int or float is formatted as a string (for example, through String.Format or Console.WriteLine), a new hidden string is created. In such cases, it is possible but inconvenient to write our own code to format strings.
When using Console.WriteLine with a value type, the value-type object will be boxed and then converted to a string, resulting in two allocations. For example, Console.WriteLine(format, myint) is effectively equivalent to Console.WriteLine(format, ((object)myint).ToString()). You can save an extra allocation by calling ToString explicitly with Console.WriteLine(format, myint.ToString()). Since WriteLine includes overloads for common primitive types, this is more of an issue for custom value types or calls to WriteLine with many arguments.
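The boxing cost is easiest to see with a custom value type. In this hypothetical sketch (the Meters struct is my own example, not from the library), passing the struct through the object parameter boxes it, while calling ToString first does not:

using System;

struct Meters
{
    public double Value;

    public override string ToString()
    {
        return Value.ToString() + " m";
    }
}

class BoxingDemo
{
    static void Main()
    {
        Meters m;
        m.Value = 12.5;

        // The struct is boxed to fill the object parameter, then ToString
        // runs on the boxed copy: an extra allocation beyond the result string.
        Console.WriteLine("distance: {0}", m);

        // Calling ToString first avoids the box; only the strings are allocated.
        Console.WriteLine("distance: {0}", m.ToString());
    }
}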
Other parts of the library exhibit these inefficiencies, too. In the Windows Forms library, for example, the Control Text property always returns a new string; this may actually be understandable, because the property may not be cached, so it must call into the Windows API function GetWindowText to retrieve the value of the control.
GDI+ is the worst abuser of the garbage collector, because a new string must be constructed for each call to MeasureText or DrawText.
One good string function is GetHashCode, which produces an integer code with a very good distribution and takes every character of the string into account.
// Converts a string to uppercase in place, mutating the supposedly immutable string.
public static unsafe void ToUpper(string str)
{
    fixed (char *pfixed = str)
        for (char *p = pfixed; *p != 0; p++)
            *p = char.ToUpper(*p);
}
The example above demonstrates how to change an immutable string through the use of unsafe pointers. Compare this with str.ToUpper(), which returns a newly constructed string that is the uppercase version of the original. An entirely new string is created whether any changes were made or not; with the code above, the same string is modified in place.
The fixed keyword pins the string in the heap so that it cannot move during a garbage collection, and allows the address of the string to be converted to a pointer. The new address points to the start of the string (or, if an index is included as in &str[index], to the location referred to by the index), which is guaranteed to be null-terminated.
The functions below provide a more complete set of routines for modifying a string, without reflection.
// The capacity (array length) is stored two ints before the character data.
public static unsafe int GetCapacity(string s)
{
fixed(char *p = s)
{
int *pcapacity = (int *)p - 2;
return *pcapacity;
}
}
public static unsafe int GetLength(string s)
{
// This function is redundant, because it accomplishes
// the same role as s.Length
// but it does demonstrate some of the precautions
// that must be taken to
// recover the length variable
fixed(char *p = s)
{
int *plength = (int *)p - 1;
int length = *plength & 0x3fffffff;
Debug.Assert(length == s.Length);
return length;
}
}
// Changes the logical length in place and moves the terminating null accordingly.
public static unsafe void SetLength(string s, int length)
{
fixed(char *p = s)
{
int *pi = (int *)p;
if (length < 0 || length > pi[-2])
    throw new ArgumentOutOfRangeException("length");
pi[-1] = length;
p[length] = '\0';
}
}
public static unsafe string NewString(int capacity)
{
return new string('\0', capacity);
}
// Overwrites a single character of the string in place.
public static unsafe void SetChar(string s, int index, char ch)
{
if (index < 0 || index >= s.Length)
    throw new ArgumentOutOfRangeException("index");
fixed(char *p = s)
p[index] = ch;
}
Modifying the values will indeed change the string. If strings can be changed, why then are they immutable? The fact that strings are immutable allows them to be passed around as easily as regular integers. Strings can be initialized with null and can be blitted from one structure to the next, without having to use an elaborate mechanism for copy-on-write.
Capacity refers to the array length of the string, which cannot be changed. However, the logical string length can be changed by writing to the 32-bit integer just before the string data. Examining or modifying this value requires caution. The string length must always be less than the array length of the string. The high two bits contain flags that indicate whether the string consists entirely of 7-bit ASCII text, in which case it can be sorted and compared quickly; a value of zero for both of these bits indicates that the state is indeterminate. When reading the length, the value must be ANDed with 0x3fffffff to mask off the two flag bits. Modifying the length is okay as long as the high two bits are clear, which is normally the case.
Future implementations, or current non-Windows implementations of the CLR, may change the underlying layout of the string. Fortunately, the version of the runtime that you built with will be the one selected when your executable is started.
System.Reflection also allows programmers to access hidden fields, members, and properties -- yes, even those marked internal and private. This reflection-based approach performs much more slowly than the manual approach above, but does not require unsafe code and should be more robust in the face of changing versions of the runtime. The line below demonstrates the ability to change the length of an immutable string through reflection:
typeof(string).GetField("m_stringLength",
    BindingFlags.NonPublic | BindingFlags.Instance).SetValue(s, 5);
I have written a short test function to demonstrate the use of the various string functions described above.
/// <SUMMARY>
/// The main entry point for the application.
/// </SUMMARY>
[STAThread]
static unsafe void Main(string[] args)
{
StringBuilder sb = new StringBuilder();
sb.Append("Good morning");
string test = "How are you";
SetLength(test, 5);
SetChar(test, 1, 'a');
string [] sarray = new String[] { String.Empty, "hello", "goodbye",
sb.ToString(), test};
foreach (string s in sarray)
Console.WriteLine("'{0}' has capacity {1}, length {2}", s,
GetCapacity(s),
GetLength(s));
Console.ReadLine();
}
The test code above results in the following output, demonstrating the full ease at which strings can actually be changed.
'' has capacity 1, length 0
'hello' has capacity 6, length 5
'goodbye' has capacity 8, length 7
'Good morning' has capacity 17, length 12
'Haw a' has capacity 12, length 5
For the faint-hearted, who prefer a less intimate, less low-level interaction with strings. It's possible to recover the actual string that is being used by StringBuilder. Through reflection, we can gain access to the hidden internal m_StringValue variable.
string test = (string) sb.GetType().GetField("m_StringValue", BindingFlags.NonPublic|BindingFlags.Instance).GetValue(sb);
The returned string "test" will continue to reflect changes made within StringBuilder, unless a modification requires more capacity than the string holds. It which case StringBuilder abandons the original string and constructs an entirely new larger working string.
One caveat is that StringBuilder will never reduce the capacity of the string, without a call to ToString. ToString basically ensures that the current string length is at least half the capacity. Without it, the current string will only grow, never shrink, always staying at its peak size.
ToString
A future article on "Objects UNDOCUMENTED" will demonstrate how to access hidden members without using reflection.
An alternative to constructing Strings without impacting the garbage collector is to do one of the following:
All of these alternatives use unsafe features, which have no real implications if you are writing a standalone application. Keep in mind, managed C++ is unsafe. I don't really recommend these approaches, but they are mentioned for your awareness.
1) stack-based character array
char *array = stackalloc char[arraysize ];
2) special struct for fixed-sized strings
[StructLayout(LayoutKind.Explicit, Size=514)]
public unsafe struct Str255
{
[FieldOffset(0)] char start;
[FieldOffset(512)] byte length;
#region Properties
public int Length
{
get { return length; }
set
{
length = Convert.ToByte(value);
fixed(char *p=&start) p[length] = '\0';
}
}
public char this[int index]
{
get
{
if (index<0 || index>=length)
throw(new IndexOutOfRangeException());
fixed(char *p=&start)
return p[index];
}
set
{
if (index<0 || index>=length)
throw(new IndexOutOfRangeException());
fixed(char *p=&start)p[index] = value;
}
}
#endregion
}
Str255 is a stack-allocated value type that allows string operations to be performed without impacting the garbage collector. It can handle up to 255 characters and includes a length byte.
A reference to the structure can be directly passed to a Windows API call, because the start of the structure is the first character of a C-string. Of course additional functions need to be written by editing strings. Careful attention would also be needed to ensure that the string is null-terminated for Windows interoperability.
Also a conversion routine would be necessary to convert the struct to a .NET string, so that others CLR functions can use it. If this is used to create a .NET string, it would be superior to StringBuilder in one respect: Str255 would require fewer allocations.
Indexing an array or string normally includes range-checking. According to Microsoft, the compiler performs a special optimization that improves the performance of iterating through an array or string. First, compare the following three approaches to iterating through a string. Which is fastest?
1) standard iteration
int hash = 0;
for (int i=0; i< s.Length; i++)
{
hash += s[i];
}
2) iterative loop with saved length variable
int hash = 0;
int length = s.length;
for (int i=0; i< length; i++)
{
hash += s[i];
}
3) foreach iteration
foreach (char ch in s)
{
hash += s[i];
}
Surprisingly, with NET v1.0 of the JIT compiler, the first example, which repeatedly calls string.Length, actually produces the fastest code, while the third "foreach" example produces the slowest. In later versions of the compiler, the foreach example will be special-cased for strings to provide the same performance or better as example 1.
string.Length
foreach
Why is example 1 faster than example 2? This is because the compiler recognizes the pattern for (int i=0; i<s.length; i++) for both strings and arrays. Strings are immutable; they have constant length. Since both strings and array are fixed lengths, the compilers simply stores away the length, so that no function call is made on each iteration. (The JIT compiler, which automatically inlines non-virtual method calls that consist of simple control flow and less than 32 bytes of IL instructions, may actually be inlining references to the string length.)
In addition, the compiler eliminates all range-check tests on any instance of s[i] within the loop, because i is guaranteed in the for condition to be within the range 0 <= i < length. Normally, any indexing of a string results in a range-check being performed; this is why attempting to save time by stashing away the length variable in example 2 actually results in slower code than in example 1.
length
NOTE: In the version 1.1, the C# team special cases foreach to behave like the first loop, so there should be no difference.
There is a fourth approach that should be even faster, although I have not verified it; however, it does require unsafe code to be emitted.
fixed (char *pfixed = s)
{
for (char *p = pfixed; *p; p++)
hash += *p++;
}
C# supports switches on strings. In doing so, it uses an efficient mechanism for switching.
public void Test(string test)
{
switch(test)
{
case "a": break;
case "b": break;
...
}
}
For small number of cases, the compiler will generate code that looks at the internal string hash table. The compiler will first call String.IsIntern on the switch value. String.IsIntern actually returns a string instead of boolean, returning null for failure, and the interned value of the string for success.
Every string constant in a case label is automatically known at compile-time and is therefore guaranteed to be intern. At that point, the compiler will compare the interned value of the string to each string constant using reference equality.
public void Test(string test)
{
object interned = IsInterned(test);
if (interned != null)
{
if (interned == "a")
{
; // Handle case a
}
else if (interned == "b")
{
; // Handle case b
}
}
}
Here is the sample IL for this code.
.method public hidebysig instance void Test(string test) cil managed
{
// Code size 34 (0x22)
.maxstack 2
.locals ([0] string CS$00000002$00000000)
IL_0000: ldstr "a"
IL_0005: leave.s IL_0007
IL_0007: ldarg.1
IL_0008: dup
IL_0009: stloc.0
IL_000a: brfalse.s IL_001f
IL_000c: ldloc.0
IL_000d: call string [mscorlib]System.String::IsInterned(string)
IL_0012: stloc.0
IL_0013: ldloc.0
IL_0014: ldstr "a"
IL_0019: beq.s IL_001d
IL_001b: br.s IL_001f
IL_001d: br.s IL_0021
IL_001f: br.s IL_0021
IL_0021: ret
}
If the number of cases is large (in this example it was 14), the compiler generates a sparse hashtable constructed with a loadfactor of .5 and with twice the capacity. [[ In actuality, a hashtable of with a loadfactor of .5 is generated with nearly a 3-1 ratio, since the Hashtable will try to maintain keep make sure that the ratio of used buckets to available buckets in the table is at most (loadfactor * .72), where the default loadfactor is 1. The magic number .72 represents the optimal ratio to balance speed and memory as determined by Microsoft performance tests. ]]
Hashtable
The hashtable will map each string to a corresponding index as shown in the C# illustration of what the compiler generates below.
private static Hashtable CompilerGeneratedHash;
public void Test(string test)
{
if (CompilerGeneratedVariable==null)
{
CompilerGeneratedHash=new Hashtable(28, 0.5);
CompilerGeneratedHash.Add("a", 0);
CompilerGeneratedHash.Add("b", 1);
...
}
object result = CompilerGeneratedHash[test];
if (result != null)
{
switch( (int) result )
{
case 0:
case 1:
...
}
}
}
The actual IL version is shown below.
.method public hidebysig instance void Test(string test) cil managed
{
// Code size 373 (0x175)
.maxstack 4
.locals ([0] object CS$00000002$00000000)
IL_0000: volatile.
IL_0002: ldsfld class [mscorlib]
System.Collections.Hashtable '<PRIVATEIMPLEMENTATIONDETAILS>
'
::'$$method0x6000106-1'
IL_0007: brtrue IL_0100
IL_000c: ldc.i4.s 28
IL_000e: ldc.r4 0.5
IL_0013: newobj instance void [mscorlib]
System.Collections.Hashtable::.ctor(int32, float32)
IL_0018: dup
IL_0019: ldstr "a"
IL_001e: ldc.i4.0
IL_001f: box [mscorlib]System.Int32
IL_0024: call instance void [mscorlib]
System.Collections.Hashtable::Add(object, object)..... object)
IL_00f9: volatile.
IL_00fb: stsfld class [mscorlib]
System.Collections.Hashtable '<PRIVATEIMPLEMENTATIONDETAILS>
'
::'$$method0x6000106-1'
IL_0100: ldarg.1
IL_0101: dup
IL_0102: stloc.0
IL_0103: brfalse.s IL_0172
IL_0105: volatile.
IL_0107: ldsfld class [mscorlib]
System.Collections.Hashtable '<PRIVATEIMPLEMENTATIONDETAILS>
'
::'$$method0x6000106-1'
IL_010c: ldloc.0
IL_010d: call
instance object [mscorlib]System.Collections.Hashtable::get_Item(object)
IL_0112: dup
IL_0113: stloc.0
IL_0114: brfalse.s IL_0172
IL_0116: ldloc.0
IL_0117: unbox [mscorlib]System.Int32
IL_011c: ldind.i4
IL_011d: switch (
IL_0158,
IL_015a,
IL_015c,
IL_015e,
....
IL_0170)
IL_0156: br.s IL_0172
....
IL_0174: ret
}
The next version of the C# compiler will be introducing a number of improvements for strings. Strings will have an improved hashing function with better distributional properties, so you don't want to rely on the current behavior.
Additional new functions in the string classes include the following: the static method IsNullOrEmpty, functions to convert to lower or uppercase in the invariant culture (ToLowerInvariant, ToUpperInvariant), and additional string normalization functions (Normalize, IsNormalized). Hashtables now directly support in the constructer several forms of case-insensitive string searches.
IsNullOrEmpty
ToLowerInvariant
ToUpperInvariant
Normalize
IsNormalized
Hashtables
These are in the current pre-release versions of Whidbey available, but there will likely be more changes prior to beta.
This concludes my discourse on strings for now. I will continue update this article with new source code and actual benchmarks in the future. Be sure to watch for update versions of this page.
As a result of the enthusiasm that this article has generated, I will continue to develop more UNDOCUMENTED articles. My next article will be a discussion of the implementations of arrays and collections. I hope to publish a couple dozen articles when I am done with the series.
My sources include various books Microsoft publishes, the shared source CLI, MSDN, magazine articles, interviews, inside sources, developer conference presentations, and good old disassembly. All of this behind-the-scenes information takes some amount of work to research and obtain, so, if you enjoyed this article, don't forget to. | https://www.codeproject.com/Articles/3377/Strings-UNDOCUMENTED?fid=13838&df=10000&mpp=10&sort=Position&spc=Relaxed&tid=3265826 | CC-MAIN-2017-26 | refinedweb | 4,016 | 56.05 |
If you’ve been following the site, you may have noticed the VR Baseball game. While this was an interesting project, there was one part that stood out as particularly worthy of writing about. In this article, I’ll show you how to setup physics in your own baseball, cricket, or other batting style game in Virtual Reality. We’ll use a simple trick to make the batting feel real and fun.
Video Version
The Game
In VR Baseball, the goal is simple. A ball is pitched, you swing the bat, and if you hit, the ball goes flying.
The Problem
The first implementation of VR Baseball had a very simple batting system. There was a colider on the bat and the ball had some bounciness. Choosing a level would just adjust the bounciness factor of the ball by switching out the PhysicsMaterial on the balls Collider. It was simple, it worked, but it felt weird. The main issue was that no-matter where on the bat you hit, the ball flew the same speed.
You could hit it with the handle, it’d go flying. You could push forward with the bat and get a home run off the tip. The only thing that was taken into account was the velocity of the bat at the time.
Now in most situations, you don’t need to worry about the velocity of the exact part where your collisions happen, only the general velocity of the object it’s colliding with. But with baseball, we have a swinging action, and we need the velocity at the end of the bat to be different from the velocity at the handle (just like real life).
The Solution
Instead of having the baseballs collide with the bat, we spawn some new objects to collide with.
The bat has a set of child objects that determine where these new objects will spawn and be at run-time.
As you can see, the BatCapsules don’t have much to them. They’re really just a script and a transform. There’s a collider and a mesh renderer, but those are only for debugging and visualizing, which is why they’re disabled normally (I would toggle them on when the follower seemed to be in the wrong spot, so I could verify the location I was giving them was valid).
The most important part though is the “Bat Capsule” script on them.
using UnityEngine; public class BatCapsule : MonoBehaviour { [SerializeField] private BatCapsuleFollower _batCapsuleFollowerPrefab; private void SpawnBatCapsuleFollower() { var follower = Instantiate(_batCapsuleFollowerPrefab); follower.transform.position = transform.position; follower.SetFollowTarget(this); } private void Start() { SpawnBatCapsuleFollower(); } }
The method SpawnBatCapsuleFollower() does exactly what its name implies. It spawns a BatCapsuleFollower, and calls a single initialization method named SetFollowTarget. We pass “this” into SetFollowTarget() as we want the BatCapsuleFollower to follow this object which is attached to the bat.
The Start() method in this script does a single thing, it calls SpawnBatCapsuleFollower(). We do this so anyone reading the code later can tell exactly what we want to do, without needing comments.
using UnityEngine; public class BatCapsuleFollower : MonoBehaviour { private BatCapsule _batFollower; private Rigidbody _rigidbody; private Vector3 _velocity; [SerializeField] private float _sensitivity = 100f; private void Awake() { _rigidbody = GetComponent<Rigidbody>(); } private void FixedUpdate() { Vector3 destination = _batFollower.transform.position; _rigidbody.transform.rotation = transform.rotation; _velocity = (destination - _rigidbody.transform.position) * _sensitivity; _rigidbody.velocity = _velocity; transform.rotation = _batFollower.transform.rotation; } public void SetFollowTarget(BatCapsule batFollower) { _batFollower = batFollower; } }
The BatCapsuleFollower script is responsible for following the bat capsule…
The work for that is done in FixedUpdate(). To follow the BatCapsule, it gets the position it wants to be in, then subtracts it from the current position. That distance is multiplied by a sensitivity value that can be adjusted in the editor. For my game, I found a value of about 50 worked well. It sets the rigidbody velocity to that calculated value which makes it move toward the BatCapsule.
Next it adjusts the rotation to match the BatCapsule. If we don’t do that, it will end up sideways.
Important – Physics Layers
When setting up your BatCapsuleFollower, make sure the Layer it’s on does not collide with itself. If you don’t, the BatCapsuleFollowers will be bouncing off each other, not doing what you want.
To get to the physics settings, click Assets->Project Settings->Physics
In here, you need to make sure the layer you’ve chosen for your BatCapsuleFollowers (please put them on their own layer), does not collide with itself. It should also not collide with anything other than the ball (unless there’s something else you want to hit). You can see I’ve set mine up to do exactly that.
Conclusion
With it setup like this, the BatCapsuleFollowers will move at different velocities, causing the outer most one to hit much further than the innermost. While this could be further tuned to make a real sweet spot on the bat, I’ve found that this functionality works well enough.
Project Download
Get Some Bats
A good friend of mine made the awesome bats featured here and they’re available on the asset store. If you wanna do some baseball stuff and support him, go grab one now 🙂 | https://unity3d.college/2016/04/11/baseball-bat-physics-unity/ | CC-MAIN-2020-34 | refinedweb | 863 | 54.12 |
#include <hallo.h> * Manoj Srivastava [Fri, Jun 09 2006, 02:02:48PM]: > On 9 Jun 2006, Christoph Berg said: > > This is also my impression. CDBS might be nice to automate the task > > "make a .deb out of this Gnome source", but imho it completely fails > > when you want to deviate from the "standard" in any way. > > I am surprised to hear you say so, since CDBS is one of the > most configurable build systems out there. You can add commands to > any phase of the build, by just adding targets/dependencies/variables. Oh, really? The last time I tried to add a custom command to the install rule (well, >> 1 year ago) it was a real PITA. Docs have not told me how it works, docs have not told me in an understandable language how to add extensions, and after trying to find a proper way to insert a command by myself, I gave up and threw it away. So I have to agree with Martin here. > >. Choice of language is not an excuse. You can write a lot of ugly things with Makefiles. > > Again, I'm fine if you use CDBS for your package, but please never > > recommend it to any new maintainer. > > Why would this not apply to any other helper packages as well? Because there is documentation telling what is going behind the scenes? Like understandable manpages for every debhelper command. Eduard. | https://lists.debian.org/debian-devel/2006/06/msg00477.html | CC-MAIN-2017-17 | refinedweb | 235 | 80.21 |
Chatlog 2009-03-25
From OWL
See original RRSAgent log and preview nicely formatted version.
Please justify/explain all edits to this page, in your "edit summary" text.
16:59:36 <IanH> IanH has changed the topic to: 17:02:21 <Zhe> scribenick: Zheernardo, bmotik, Michael_Schneider,> +bernardo 17:02:48 <Zhe> Topic: Admin 17:02:51 <Zakim> +alanr; got it 17:02:53 <bernardo> Zakim, mute me 17:02:55 <ivan> zakim, mute me 17:03:04 <Zakim> sorry, bernardo, I do not know which phone connection belongs to you 17:03:07 <msmith> msmith has joined #owl 17:03:09 <Zakim> Ivan should now be muted 17:03:18 <pfps> q+ 17:03:24 <zimmer> zimmer has joined #owl 17:03:33 <alanr> ack pfps 17:03:40 <Zakim> +msmith 17:03:43 <Zakim> sorry, bernardo,, bernardo, msmith 17:03:55 <bernardo> Zakim, mute me 17:04:01 <Zakim> sorry, bernardo, I do not know which phone connection belongs to you 17:04:04 <Zakim> On IRC I see zimmer, msmith, MarkusK_, Rinke, Zhe, uli, bernardo, bmotik, schneid, pfps, Zakim, RRSAgent, alanr, bparsia, ivan, IanH, sandro, trackbot 17:04:25 <Zhe> PROPOSED: accept previous minutes 17:04:34 <pfps> minutes are minimally acceptable 17:04:40 <alanr> +1 17:04:41 <Michael_Schneider> sorry, possibly only on IRC today 17:04:42 <bernardo> Zakim, bernardo is bernardo 17:04:42 <Zakim> +bernardo; got it 17:04:48 <bernardo> Zakim, mute me 17:04:48 <Zakim> bernardo> SubTopic: publishingernardo (muted), msmith, zimmer 17:09:06 <Zakim> On IRC I see Zhe, jar, zimmer, msmith, Rinke, uli, bernardo, bmotik, Michael_Schneider ,: Documents and Reviewing 17:09:29 <Zhe> alanr: the only one not ready is rdf text. There is one involves our group. 17:09:52 <ivan> is the rdf semantics ready for review? I am not sure. RDF or RIF foks? 17:10:39 <bparsia> q+. Allow people to indicate which unicode version 17:11:54 <pfps> I'm confused, why can't regular regexp libraries be used? 17:12:07 <alanr> q? 17:12:30 <Zakim> +[IPcaller] 17:12:49 <Michael_Schneider> zakim, [IPcaller] is me 17:12:49 <Zakim> +schneid; got it 17:12:54 <Michael_Schneider> zakim, mute me 17:12:54 <Zakim> Michael_Schneider. I don't want it be a technical discussion. 
The focus is on how to resolve it asap. 17:13:28 <alanr> ack bmotik 17:13:39 <bparsia> Toolkit I want to use: 17:13:40 <Zhe> bmotik: a few technical questions. I did not understand bijan, what is the semantics if you indicate unicode version? 17:13:57 <bparsia> I meant in OWL 17:14:05 <bparsia> rdf:text:unicode3.1 17:14:12 <bparsia> Oh! Nice! 17:14:35 <ivan> q+ 17:14:39 <Zhe> ... if we go with finite alphabet, do we really know how many is in the unicode version. 17:14:46 <bparsia> q+ 17:14:48 <alanr> ack ivan 17:15:10 <Zhe> ivan: there is an upper limit for unicode characters.:40 <ivan> zakim, mute me 17:15:40 <Zakim> Ivan should now be muted 17:15:51 <Zhe> bparsia: it is nice to know about the libraries. makes me happy. My proposal is that in the ontology, when I define a class, I am using a particular unicode version. However, if it is finite, then all problems are solved 17:15:59 <sandro> -- from Addison Phillips, Chair -- W3C Internationalization WG 17:16:23 <alanr> q? 17:16:27 <alanr> ack bmotik <Michael_Schneider> zakim, unmute me 17:17:49 <Zakim> Michael_Schneider should no longer be muted 17:17:53 <Zhe> alanr: RDF semantics 17:18:09 <Zhe> Michael_Schneider: 3 or 4 more days. Should be done this weekend. 17:18:44 <Zhe> alanr: will you notify your reviewers 17:18:55 <Zhe> Michael_Schneider will send out email once it is done 17:19:00 <alanr> q? 17:19:03 <Michael_Schneider> zakim, mute me 17:19:03 <Zakim> Michael_Schneider should now be muted 17:19:18 <Zhe> SubTopic: Changes since last call 17:19:24 <sandro> 17:19:46 <Zhe> sandro: does not show correct deadline about n-ary 17:19:54 <bparsia> n-ary is not ready right now 17:20:02 <bparsia> Probablynext week 17:20:07 <Zhe> alanr: can we talk about n-ary? Bjian mentioned that we can put in a small example. When is it going to be ready? Next meeting or weekend? 
17:20:10 <bparsia> I couldn't last week 17:20:17 <bparsia> But I can do it for next week 17:20:37 <alanr> q?, until that is done, the document is not ready for reviewing 17:22:37 <alanr> q? 17:23:08 <IanH> Anticipated changes vis a vis naming should be very small 17:23:31 <Zhe> ivan: the bottom line is that that document is not ready for reviewing 17:23:51 <Zhe> alanr: hopefully it is a small change.. I can post the diff I have made 17:26:11 <IanH> I looked -- looks fine to me. The introduction to QL is very technical oriented. Need to have more understandable rationale in profiles. 17:27:10 <alanr>. Can one editor stand up? 17:28:42 <pfps> -> Ian 17:28:58 <IanH> I'm 17:28:59 <Zhe> ... zhe, can you do it? 17:29:01 <Zhe> Zhe ok 17:29:06 <IanH> I'm willing to help 17:29:06 <alanr> q? 17:29:17 <Zhe> thanks Ian 17:29:25 <bparsia> +1 17:29:26 <Zhe> alanr: comment on XML syntax, looks ready to me 17:29:37 <Michael_Schneider> q+ 17:29:53 <ivan> q+ 17:29:54 <Zhe> alanr: one on rdf semantics, one on OWL DL the language. 17:29:54 <Michael_Schneider> q- 17:29:57 <alanr> q? 17:30:11 <Michael_Schneider> JR6a should be checked by people 17:30:15 <Zhe> ivan: on the rdf semantics, michael did more than required. It is not even last call document. <Michael_Schneider> ah, I understand! ok 17:31:09 <Zhe> alanr: not hearing object to sending out these two responses,ernardo (muted), msmith, zimmer, Michael_Schneider 17:31:40 <Zhe> Jonathan: regarding AR1, looks like there are some extension point to define new data types in new namespaces as long as they are compatible with other SPECs. Question is if there are some risk, you will get incompatibility. XML try to address it through namespaces. Here it seems that we are reusing an existing namespace 17:31:43 <Zakim> ... 
(muted), MarkusK_ 17:31:44 <Zakim> On IRC I see MarkusK_, Zhe, jar, zimmer, msmith, Rinke, uli, bernardo, bmotik, Michael_Schneider , pfps, Zakim, RRSAgent, alanr, bparsia, ivan, IanH, sandro, trackbot 17:31:54 <ivan> zakim, alanr has jonathan 17:31:54 <Zakim> +jonathan; got it 17:32:14 <bparsia> q+ 17:32:16 <pfps> q+:54 <bmotik> (Aside: For all I care, we can have a vote right now to change the value space of rdf:text) 17:32:58 <Zakim> Ivan should now be muted 17:33:31 <alanr> q? 17:33:34 <bparsia> zakim, unmute me 17:33:34 <Zakim> bparsia should no longer be muted 17:33:56 <Zhe> bparsia: we should not worry about the "risk." I dont' think there is a real risk that people are stepping into OWL namespace. Given that the community has matured, there should not be a problem 17:33:58 <Michael_Schneider> I will nevertheless wait another 24 hours with sending 52a, so people can check 17:34:24 <alanr> q? 17:34:27 <alanr> ack bparsia. 1) on the issue that bijan addressed. I think extensibility point is properly architected. 2) interop problem. 10 well defined datatypes, 5 are required by OWL. I suspect users will find their tool support more datatypes which creates interoperability problem 17:35:35 <bmotik> q+ 17:37:16 <bparsia> ? 17:37:22 <alanr> q?: I am a bit lost. Is this about xml schema datatypes? You cannot define new datatypes in XML schema namespace anyway. I don't see any issue 17:38:26 <alanr> q? 17:38:27 <pfps> q- 17:38:40 <alanr> ack bparsia 17:38:47 <jar> q? 17:38:52 <Zhe> bparsia: I agree with boris. It is clearly SPECed. I don't feel the need as a tool vendor that we need user permission to extend. I haven't seen troubles in the past 17:39:49 <alanr> q? 17:40:52 <Zhe> sandro: you don't think WG should give this advice 17:41:11 <Zhe> bparsia: not sure it is a good advice. Tool vendors should decide by themselves 17:41:43 <pfps> +1 17:42:08 <Zhe> sandro: I am persuaded by bijan. 
The worst case is that extension is used unexpectedly 17:42:15 <pfps> In many cases there may be *no* user to warn.. It is not our job to tell tools to pop up warnings 17:43:42 <jar> q? 17:44:11 <Zhe> bparsia: Pellet has a mode that approximates things that it cannot handle. We define what is mandatory. It is not the WG's job to define behavior for things beyond. 17:44:25 <jar> q+ jar 17:44:41 <alanr> ack bparsia 17:44:53 <alanr> ack bmotik, Michael, have you sent the response out? 17:48:52 <Michael_Schneider> zakim, unmute me 17:48:52 <Zakim> Michael_Schneider should no longer be muted 17:49:15 <Zhe> Michael_Schneider: the second was drafted by peter, I am out of it 17:49:20 <Michael_Schneider> zakim, mute me 17:49:20 <Zakim> Michael_Schneider, we should align with XML schema. Change is not that big. the only thing is that timezone is disjoint. I don't think users will get lots of problems 17:51:13 <alanr> q+ to ask why timezone but not the other elements of the 7 tuple 17:51:30 <sandro> q+ 17:51:39 <bmotik> Zakim, mute me 17:51:39 <Zakim> bmotik should now be muted, how is that aligned? 17:52:06 <alanr> q? 17:52:32 <Zhe> bmotik: the 7 tuple can map to a particular time you cannot tell the difference:52 <ivan> ack sandro 17:53:00 <Zhe> sandro: this is sounding like a bug in XML schema. The point is to reason different times in different zones. This behavior is not what I want as a user 17:53:04 <bmotik> q+ 17:53:08 <pfps> q+ 17:53:18 <bparsia> I recommend avoiding timezones in ontologies ;) 17:53:22 <bparsia> Preprocess! functional property p, s p t1, s p t2, and t1 and t2 are two values pointing to the same time point in two timezones, they violate the constraint. XML schema wants to keep this time zone information in the value space. Because they have functions to compare, it is not a bug in my opinion 17:54:39 <bparsia> Discussion of identity vs. 
equality I just wrote: <> 17:55:13 <pfps> q- 17:55:33 <bparsia> q+ to propose at riskiness 17:55:43 <alanr> q+ to ask one last question - why con consider this extralogical and answer queries against syntax. it is always possible to normalize all timestamp values with respect to timezones 17:57:40 <bparsia> zakim, mute me) Anyone use XQuery with OWL will find it difficult 17:59:03 <bparsia> One can do that as a preprocessing phase..if you wanted that 17:59:09 <alanr> q?. I am scared to use equality 18:00:24 <sandro> bparsia: OWL doesn't have the luxury of two operators, since counting works with identity.. It is not that often that two (same) values use different timezones 18:01:27 <bparsia> I agree with boris to some extent 18:01:38 <alanr> q? 18:01:39 <bparsia> It's work aroundable 18:01:52 <jar> q+ jar to wonder about calendar merging. Tools can normalize xsd:dateTimeStamp values in different zones 18:05:07 <sandro> bparsia: This all comes back to us having to chose between XS Identity and XS Equality as our Identity, and the compelling evidence is on the side of XS Identity.> Zhe 0 18:07:02 <bmotik> I.e., no at risk. 18:07:04 <bparsia> -1 18:07:07 <uli> no at risk 18:07:08 <bernardo> 0 18:07:10 <MarkusK_> 0 18:07:11 <Michael_Schneider>> Zhe +1 18:07:32 <msmith> +1 18:07:39 <MarkusK_> 0 18:07:42 <bernardo> +1 18:07:45 <Michael_Schneider> <Michael_Schneider> q+ on question about relevance for RDF-Based Semantics 18:09:01 <sandro> q+ 18:09:09 <alanr> ack Michael_Schneider 18:09:10 <Michael_Schneider> zakim, unmute me 18:09:11 <Zakim> Michael_Schneider, you wanted to comment on question about relevance for RDF-Based Semantics 18:09:13 <Zakim> Michael_Schneider was not muted, Michael_Schneider 18:09:30 <ivan> none 18:09:30 <bparsia> q+ 18:09:38 <Zhe> Michael_Schneider: point from me is that what is the relevance to us. CURIE is used only for representation purpose. 
I wonder what is the implication 18:09:48 <pfps> RDF-semantics should not be affected 18:10:01 <ekw> ekw has joined #owl 18:10:02 <alanr> ack bparsia 18:10:04 <ivan> Michael_Schneider: do not worry! Nothing <Michael_Schneider> zakim, mute me 18:10:53 <Zakim> Michael_Schneider <Michael_Schneider> Michael_Schneider: <Zhe> bmotik: final question, do we still call it CURIEs, 18:14:38 <pfps> use prefixed name, just like SPARQL 18:14:48 <ivan> +1 to pfps 18:14:49 <Zhe> alanr: I suggest not 18:14:52 <Zhe> bmotik: agree 18:14:58 <Zakim> +calvanese 18:15:00 <Zhe> bparsia: agree as well 18:15:01 <Michael_Schneider> "abbreviated IRIs" 18:15:08 <pfps> SPARQL syntax says IRIref ::= IRI_REF | PrefixedName 18:15:13 <Zhe> bparsernardo> +1 18:15:44 <msmith> +1 18:15:46 <Rinke> +1 18:15:47 <uli> +1 18:15:48 <Michael_Schneider> 0 18:15:49 <zimmer> +1 18:15:52 <sandro> +0 (haven't studied it enough) 18:15:55 <ekw> +0 18:16:06 <Zhe> Zhe +1 18:16:00 <alanr> RESOLVED: OWL will not rely on CURIEs spec but will define it's own IRI abbreviation mechanism compatible with the one used by SPARQL 18:16:32 <alanr> 18:16:36 <sandro> scribe: sandro 18:16:48 <sandro> alanr: Ian has written up how he understand we use Names 18:17:03 <calvanese> zakim, mute me 18:17:03 <Zakim> calvanese should now be muted 18:17:08 <sandro> alanr: Question 1 -- does this match your sense of the names 18:17:15 <Zhe> Zhe has joined #owl 18:17:17 <sandro> alanr:r:, make sure our usage in the document set follows this 18:28:10 <IanH> Peter (very kindly) check schema conformance already 18:28:20 <IanH> s/check/checked/ 18:28:35 <Zakim> -Ivan 18:28:53 <Michael_Schneider> bye 18:28:56 <Rinke> bye 18:28:57 <Zakim> -bernardo> -Michael_Schneiderernardo, zimmer, Michael_Schneider, # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000762 | http://www.w3.org/2007/OWL/wiki/Chatlog_2009-03-25 | CC-MAIN-2016-22 | refinedweb | 2,497 | 68.3 |
Which of these packages is automatically loaded, so you don't need to import it?
1. java.applet
2. java.string
3. java.awt
4. java.lang
4. java.lang 100%
Named constants make your programs clearer and easier to maintain. To create a constant in Java that can be used in all the methods in your class, you'll normally use all of these modifiers except ____________ .
1. final
2. const
3. static
4. public
2. const 100%
Method definitions should be placed:
1. after the class it is used in
2. before the class it is used in
3. inside the Java class library
4. inside the body of your class
4. inside the body of your class 100%
Which of these shows the correct way to print a literal number?
1. System.out.println(123.75);
2. System.out.println({123.75});
3. System.out.println("123.75");
4. System.out.println('123.75');
1. System.out.println(123.75); 100%
The generic term for methods that calculate and return a value is a ____________________.
1. algorithm
2. procedure
3. subroutine
4. function
4. function 100%
public int minimum(int x, int y)
{
int smaller;
if (x < y)
smaller = x;
else
smaller = y;
return smaller;
}
Based on the above code, what would be the output of the statement
int s = minimum(5, minimum(3, 7));
1. 7
2. 3
3. There would be no output; this is not a valid statement.
4. 5
2. 3 100%
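The nested call is evaluated inside-out: minimum(3, 7) runs first and returns 3, which then becomes the second argument of the outer call. A runnable check (the class name is mine, not part of the quiz):

```java
public class MinimumDemo {
    // Same logic as the quiz's minimum() method
    public static int minimum(int x, int y) {
        int smaller;
        if (x < y)
            smaller = x;
        else
            smaller = y;
        return smaller;
    }

    public static void main(String[] args) {
        // Inner call first: minimum(3, 7) -> 3, then minimum(5, 3) -> 3
        int s = minimum(5, minimum(3, 7));
        System.out.println(s); // prints 3
    }
}
```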
Suppose that alpha and beta are int variables. The statement alpha = beta++; is equivalent to the statement(s) ____.
1. alpha = beta;
beta = beta + 1;
2. None of these
3. alpha = 1 + beta;
4. alpha = alpha + beta;
1. alpha = beta;
beta = beta + 1; 100%
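Post-increment yields the variable's old value and increments it afterwards, which is exactly the two-statement equivalence in answer 1. A small sketch (class and method names are mine):

```java
public class PostIncrementDemo {
    // Returns {alpha, beta} after executing: alpha = beta++;
    public static int[] run(int start) {
        int beta = start;
        int alpha = beta++;  // alpha gets beta's old value; beta is then incremented
        return new int[] { alpha, beta };
    }

    public static void main(String[] args) {
        int[] r = run(10);
        System.out.println("alpha = " + r[0] + ", beta = " + r[1]); // alpha = 10, beta = 11
    }
}
```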
If s is a String object, which of these relational expressions is illegal (that is, your code will not compile if you use it) :
A. s == "Hi"
B. s != "Hi"
C. s > "Hi"
D. All of these are legal
C. s > "Hi" 100%
If an account balance is $1,000 or more, you pay interest. If the balance is less than $1,000 you charge a fee. The best way to write this is to :
A. use two if statements
B. use a single if, else statement
C. use an if, else if, else statement
D. use a switch statement
B. use a single if, else statement 100%
In a for loop, which of the following is executed first?
1. logical expression
2. for loop statement
3. update expression
4. initial expression
4. initial expression 100%
A loop that searches for a particular value in input is called a(n) :
A. counted [or counting] loop
B. sentinel loop
C. data loop
D. endless loop
B. sentinel loop 100%
int x, y;
if (x > 5)
y = 1;
else if (x < 5)
{
if (x < 3)
y = 2;
else
y = 3;
}
else
y = 4;
Based on the code above, if the value of y is found to be 2, what is a possible value of x?
1. 3
2. 6
3. 2
4. 5
3. 2 100%
Given the following switch statement where x is an int:
switch (x)
{
case 3 : x += 1;
case 4 : x += 2;
case 5 : x += 3;
case 6 : x++;
case 7 : x += 2;
case 8 : x--;
case 9 : x++;
}
If x is currently equal to 5, what will the value of x be after the switch statement executes?
A. 8
B. 9
C. 10
D. 11
E. 12
D. 11 100%
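With no break statements, execution falls through every case from the matching one onward: 5 + 3 = 8, then ++ gives 9, + 2 gives 11, -- gives 10, and the final ++ gives 11. A sketch to verify (wrapped in a method of my own naming):

```java
public class FallThroughDemo {
    public static int run(int x) {
        switch (x) {
            case 3: x += 1;
            case 4: x += 2;
            case 5: x += 3;  // matching case for x = 5
            case 6: x++;
            case 7: x += 2;
            case 8: x--;
            case 9: x++;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(run(5)); // 11
    }
}
```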
What is the output of the following Java code?
int x = 55;
int y = 5;
switch (x % 7)
{
case 0:
case 1:
y++;
case 2:
case 3:
y = y + 2;
case 4:
break;
case 5:
case 6:
y = y - 3;
}
println(y);
1. None of these
2. 2
3. 8
4. 5
2. 2 100%
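55 % 7 is 6, so control jumps straight to case 6, where y = y - 3 gives 5 - 3 = 2, and the switch ends. A runnable check (class and method names are mine; println here stands for System.out.println):

```java
public class SwitchModDemo {
    public static int run(int x, int y) {
        switch (x % 7) {
            case 0:
            case 1:
                y++;
            case 2:
            case 3:
                y = y + 2;
            case 4:
                break;
            case 5:
            case 6:
                y = y - 3;   // 55 % 7 == 6 lands here
        }
        return y;
    }

    public static void main(String[] args) {
        System.out.println(run(55, 5)); // 2
    }
}
```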
What is wrong, logically, with the following code?
if (x > 10)
println("Large");
else if (x > 6 && x <= 10)
println("Medium");
else if (x > 3 && x <= 6)
println("Small");
else
println("Very small");
A. There is no logical error, but there is no need to have (x <= 10) in the second conditional or (x <= 6) in the third conditional
B. There is no logical error, but there is no need to have (x > 6) in the second conditional or (x > 3) in the third conditional
C. The logical error is that no matter what value x is, "Very small" is always printed out
D. The logical error is that no matter what value x is, "Large" is always printed out
E. There is nothing wrong with the logic at all
A. There is no logical error, but there is no need to have (x <= 10) in the second conditional or (x <= 6) in the third conditional 100%
Which of the following is not a function of the break statement?
1. To skip the remainder of a switch structure
2. To eliminate the use of certain boolean variables in a loop
3. To ignore certain values for variables and continue with the next iteration of a loop
4. To exit early from a loop
3. To ignore certain values for variables and continue with the next iteration of a loop 100%
What is the output of the following Java code?
int count = 1;
int num = 25;
while (count < 25)
{
num = num - 1;
count++;
}
println(count + " " + num);
1. 24 0
2. 25 0
3. 25 1
4. 24 1
3. 25 1 100%
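The body runs 24 times (count = 1 through 24), so num drops from 25 to 1 while count ends at 25. Verifiable as (class name is mine):

```java
public class WhileCountDemo {
    // Returns {count, num} after the loop from the question
    public static int[] run() {
        int count = 1;
        int num = 25;
        while (count < 25) {
            num = num - 1;
            count++;
        }
        return new int[] { count, num };
    }

    public static void main(String[] args) {
        int[] r = run();
        System.out.println(r[0] + " " + r[1]); // 25 1
    }
}
```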
Which of the following statements creates alpha, an array of 5 components of the type int, and initializes each component to 10?
(i) int[] alpha = {10, 10, 10, 10, 10};
(ii) int[5] alpha = {10, 10, 10, 10, 10}
1. None of these
2. Only (ii)
3. Only (i)
4. Both (i) and (ii)
3. Only (i) 100%
Which of the following about Java arrays is true?
(i) Array components must be of the type double.
(ii) The array index must evaluate to an integer.
1. None of these
2. Both (i) and (ii)
3. Only (ii)
4. Only (i)
3. Only (ii) 100%
boolean found = true;
while (found)
{
entry = readInt();
triple = entry * 3;
if (entry > 33)
found = false;
}
The above code is an example of a(n) ____ loop.
1. EOF-controlled
2. flag-controlled
3. counter-controlled
4. sentinel-controlled
2. flag-controlled 100%
To respond to events in your graphical applets or applications, you'll need to import the ____________ package.
1. java.awt.event
2. java.event
3. javax.swing
4. java.awt
1. java.awt.event 100%
Assume the assignment statement: a = b; where a and b are type java.awt.Rectangle. This is called a _________ copy and we say that a and b both have ___________ semantics.
1. shallow, value
2. deep, value
3. shallow, reference
4. deep, reference
3. shallow, reference 100%
The Graphics drawPolyline() method uses ________________ to draw a series of lines.
1. a two-dimensional array of int
2. a one-dimensional array of Point
3. a parallel array of Points
4. two parallel arrays of int
4. two parallel arrays of int 100%
In the inheritance hierarchy Animal, Mammal, Rodent, "Mickey Mouse", Mammal is a _____________ of Animal.
1. subclass
2. instance
3. ancestor
4. generalization
5. superclass
1. subclass 100%
What is stored in alpha after the following code executes?
int[] alpha = new int[5];
int j;
for (j = 0; j < 5; j++)
{
alpha[j] = j + 1;
if (j > 2)
alpha[j - 1] = alpha[j] + 2;
}
1. None of these
2. alpha = {1, 2, 3, 4, 5}
3. alpha = {1, 5, 6, 7, 5}
4. alpha = {4, 5, 6, 7, 9}
1. None of these 100%
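Tracing it: for j = 3 and j = 4 the if branch overwrites the slot filled on the previous pass, leaving {1, 2, 6, 7, 5}, which matches none of the listed options. A sketch that prints the final array (class name is mine):

```java
import java.util.Arrays;

public class ArrayTraceDemo {
    public static int[] run() {
        int[] alpha = new int[5];
        for (int j = 0; j < 5; j++) {
            alpha[j] = j + 1;
            if (j > 2)
                alpha[j - 1] = alpha[j] + 2; // overwrites the previous slot
        }
        return alpha;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(run())); // [1, 2, 6, 7, 5]
    }
}
```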
When you instantiate an object from a class, ____ is reserved for each instance field in the class.
1. a field name
2. a constructor
3. memory
4. a signature
3. memory 100%
public class Illustrate
{
private int x;
private int y;
public Illustrate()
{
x = 1;
y = 2;
}
public Illustrate(int a)
{
x = a;
}
public void print()
{
System.out.println("x = " + x + ", y = " + y);
}
public void incrementY()
{
y++;
}
}
What does the default constructor do in the class definition above?
1. Sets the value of x to 0
2. Sets the value of x to a
3. There is no default constructor.
4. Sets the value of x to 1
4. Sets the value of x to 1 100%
A(n) ____ constructor is one that requires no arguments.
1. class
2. default
3. write
4. explicit
2. default 100%
If a class's only constructor requires an argument, you must provide an argument for every ____ of the class that you create.
1. parameter
2. type
3. object
4. method
3. object 100%
Consider the following class definition.
public class Rectangle
{
private double length;
private double width;
public Rectangle()
{
length = 0;
width = 0;
}
public Rectangle(double l, double w)
{
length = l;
width = w;
}
public void set(double l, double w)
{
length = l;
width = w;
}
public void print()
{
System.out.println(length + " " + width);
}
public double area()
{
return length * width;
}
public double perimeter()
{
return 2 * length + 2 * width;
}
}
Which of the following statements correctly instantiate the Rectangle object myRectangle?
(i) myRectangle Rectangle = new Rectangle(10, 12);
(ii) class Rectangle myRectangle = new Rectangle(10, 12);
(iii) Rectangle myRectangle = new Rectangle(10, 12);
1. Only (i)
2. Both (ii) and (iii)
3. Only (iii)
4. Only (ii)
3. Only (iii) 100%
You are ____ required to write a constructor method for a class.
1. sometimes
2. often
3. never
4. always
3. never 100%
Consider the following statements.
public class Circle
{
private double radius;
public Circle()
{
radius = 0.0;
}
public Circle(double r)
{
radius = r;
}
public void set(double r)
{
radius = r;
}
public void print()
{
System.out.println(radius + " " + area() + " "
+ circumference());
}
public double area()
{
return 3.14 * radius * radius;
}
public double circumference()
{
return 2 * 3.14 * radius;
}
}
Circle myCircle = new Circle();
double r;
Which of the following statements are valid in Java? (Assume that cin is a Scanner object initialized to the standard input device.)
(i)
r = cin.nextDouble();
myCircle.area = 3.14 * r * r;
System.out.println(myCircle.area);
(ii)
r = cin.nextDouble();
myCircle.set(r);
System.out.println(myCircle.area());
1. None of these
2. Only (ii)
3. Both (i) and (ii)
4. Only (i)
2. Only (ii) 100%
Data types that contain a single value are known as ___________________ types.
1. fundamental
2. reference
3. structured
4. scalar
5. class
4. scalar 100%
Examine the following UML diagram:
Which of these fields or methods are inherited by the Person class?
1. name, getName(), setName(), getID()
2. toString()
3. None of them
4. studentID, name, getName(), setName(), getID()
5. getName(), setName(), studentID, getID()
6. getName(), setName(), name
2. toString() 100%
Errors made using formal parameter variables defined as enumerated types are ___________________:
1. compile time errors
2. logic errors
3. runtime errors
4. None of these
5. runtime exceptions
1. compile time errors 100%
Assume that you have an ArrayList variable named a containing 4 elements, and an object named element that is the correct type to be stored in the ArrayList. Which of these statements replaces the first object in the collection with element?
1. a[0] = element;
2. a.add(0, element);
3. a.set(element, 0);
4. a.set(0, element);
4. a.set(0, element); 100%
Which of the following lines of code explicitly calls the toString() method, assuming that pete is an initialized Student object variable?
1. println(pete.toString());
2. println("" + pete);
3. println(super.toString());
4. println(pete);
1. println(pete.toString()); 100%
If you do not specify a superclass when using inheritance, the class you are defining implicitly extends the __________________ class.
1. Object
2. Component
3. GObject
4. Root
5. java.lang
1. Object 100%
Which of the following statements is true?
1. The class Throwable, which is derived from the class Exception, is the superclass of the class Object.
2. None of these
3. The class Exception, which is derived from the class Object, is the superclass of the class Throwable.
4. The class Throwable, which is derived from the class Object, is the superclass of the class Exception.
4. The class Throwable, which is derived from the class Object, is the superclass of the class Exception. 100%
double[] as = new double[7];
double[] bs;
bs = as;
How many objects are present after the code fragment above is executed?
1. 14
2. 2
3. 1
4. 7
3. 1 100%
Polygon is a class that defines regular polygons (figures such as equilateral triangles, squares, and regular pentagons—polygons where all the sides have the same length). It has, among other public methods, one named area(), which takes no parameters, and returns as type double the area of the polygon.
Classes Square, EqiTriangle and Pentagon are derived from Polygon. Square and EqiTriangle each have, among other public member functions, one named area(), which takes no parameters and returns as type double the area of a Square and EqiTriangle, respectively. Pentagon does not define a method named area().
Suppose you wish to call Polygon's area() method in the definition of EqiTriangle's area() method; both area() methods have the same signature. How is this done?
A. (polygon)area();
B. super.area();
C. this.area();
D. area();
E. parent.area();
B. super.area(); 100%
double[][] vals = {{1.1, 1.3, 1.5},
{3.1, 3.3, 3.5},
{5.1, 5.3, 5.5},
{7.1, 7.3, 7.5}}
How many rows are in the array above?
1. 3
2. 0
3. 2
4. 4
4. 4 100%
Which of the following is an exception thrown by the methods of the class String?
1. FileNotFoundException
2. NumberFormatException
3. NoSuchElementsException
4. NullPointerException
4. NullPointerException 100%
A car dealership needs a program to store information about the cars it has for sale. For each car, they want to keep track of the following information: number of doors (2 or 4), whether the car has air conditioning, and its average number of miles per gallon. Which of the following is the best design?
1. --------------------------------------------------
Use four unrelated classes: Car, Doors, AirConditioning and MilesPerGallon.
2. --------------------------------------------------
Use three classes: Doors, AirConditioning, and MilesPerGallon, each of which has a subclass Car.
3. --------------------------------------------------
Use one class, Car which has three instance variables: int numDoors, boolean hasAir and double milePerGallon.
4. --------------------------------------------------
Use a class Car which has three subclasses: Doors, AirConditioning and MilesPerGallon.
5. --------------------------------------------------
Use a class Car which has a subclass Doors, which has a subclass AirConditioning which has a subclass MilesPerGallon.
3. --------------------------------------------------
Use one class, Car which has three instance variables: int numDoors, boolean hasAir and double milePerGallon. 100%
Consider the following code segment.
for (int i = 0; i < 20; i = i + 2)
{
if (i % 3 == 1)
System.out.print(i + " ");
}
What is printed as a result of executing the code segment?
1. 4 16
2. 0 6 12 18
3. 4 10 16
4. 1 4 7 10 13 16 19
5. 0 2 4 6 8 10 12 14 16 18
3. 4 10 16 100%
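Only even values of i are generated (0, 2, …, 18), and of those only 4, 10, and 16 leave remainder 1 when divided by 3. A runnable check (class name is mine):

```java
public class LoopFilterDemo {
    public static String run() {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < 20; i = i + 2) {
            if (i % 3 == 1)
                out.append(i).append(" ");
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(run()); // 4 10 16
    }
}
```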
Consider the following code segment:
ArrayList<String> list = new ArrayList<String>();
list.add("P");
list.add("Q");
list.add("R");
list.set(2, "s");
list.add(2, "T");
list.add("u");
What is printed as a result of executing this code segment?
1. [P, Q, R, s, T]
2. [P, Q, s, T, u]
3. [P, Q, T, s, u]
4. [P, T, Q, s, u]
5. [P, T, s, R, u]
3. [P, Q, T, s, u] 100%
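set(2, "s") replaces R with s, add(2, "T") inserts T and shifts s right, and add("u") appends, giving [P, Q, T, s, u]. Runnable (class name is mine):

```java
import java.util.ArrayList;

public class ListOpsDemo {
    public static ArrayList<String> run() {
        ArrayList<String> list = new ArrayList<String>();
        list.add("P");
        list.add("Q");
        list.add("R");
        list.set(2, "s");   // replace index 2: [P, Q, s]
        list.add(2, "T");   // insert at index 2: [P, Q, T, s]
        list.add("u");      // append: [P, Q, T, s, u]
        return list;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [P, Q, T, s, u]
    }
}
```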
Consider the following class definition:
public class TimeRecord
{
private int hours;
private int minutes; // 0 <= minutes < 60
public TimeRecord(int h, int m)
{
hours = h;
minutes = m;
}
/** @return the number of hours */
public int getHours() { /* implementation not shown */ }
/** @return the number of minutes */
public int getMinutes() { /* implementation not shown */ }
/** Adds h hours and m minutes to this TimeRecord.
* @param h the number of hours
* @param m the number of minutes
*/
public void advance(int h, int m)
{
hours = hours + h;
minutes = minutes + m;
/* missing code */
}
// ... other methods not shown
}
The instance variable minutes must always be at least 0 and less than 60, even when the time is changed. Which of the following code sections can be used to replace the /* missing code */ in the advance() method so that this condition remains true and so that the time is correctly advanced?
1. --------------------------------------------------
minutes = minutes + hours % 60;
2. --------------------------------------------------
hours = hours + minutes / 60;
3. --------------------------------------------------
minutes = minutes % 60;
4. --------------------------------------------------
hours = hours + minutes % 60;
minutes = minutes / 60;
5. --------------------------------------------------
hours = hours + minutes / 60;
minutes = minutes % 60;
5. --------------------------------------------------
hours = hours + minutes / 60;
minutes = minutes % 60; 100%
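Answer 5 first carries whole hours out of minutes (integer division), then keeps only the remainder, so minutes stays in the range [0, 60). A compact check of just that normalization step (method name is mine):

```java
public class TimeNormalizeDemo {
    // Returns {hours, minutes} after applying answer 5's normalization
    public static int[] advance(int hours, int minutes, int h, int m) {
        hours = hours + h;
        minutes = minutes + m;
        hours = hours + minutes / 60;  // carry whole hours out of minutes
        minutes = minutes % 60;        // keep only the remainder
        return new int[] { hours, minutes };
    }

    public static void main(String[] args) {
        int[] t = advance(1, 50, 2, 25); // 1:50 advanced by 2:25
        System.out.println(t[0] + ":" + t[1]); // 4:15
    }
}
```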
Consider the following class definition:
public class Gribble implements Comparable<Gribble>
{
// ... other methods not shown
}
Which of the following method signatures will satisfy the interface requirement that this class agreed to?
I. public int compareTo(Object other)
II. public int compareTo(Gribble other)
III. public boolean compareTo(Gribble other)
1. III only
2. I and II only
3. I, II and III
4. I only
5. II only
5. II only 100%
Assume that the following declarations have been made in a class:
private String s;
private int n;
public void changer(String x, int y)
{
x = x + "peace";
y = y * 2;
}
public void mystery()
{
s = "world";
n = 6;
changer(s, n);
System.out.println("s = " + s + ", n = " + n);
}
What is printed when the mystery() method is called?
1. s = peace, n = 12
2. s = world, n = 12
3. s = worldpeace, n = 6
4. s = worldpeace, n = 12
5. s = world, n = 6
5. s = world, n = 6 100%
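Java passes a copy of each argument: x receives a copy of the reference held in s (so rebinding x does not touch s), and y receives a copy of n. A standalone sketch of the same mechanics (class name is mine; the fields are static here only to keep the example self-contained):

```java
public class PassByValueDemo {
    private static String s;
    private static int n;

    public static void changer(String x, int y) {
        x = x + "peace"; // rebinds only the local parameter; s is unaffected
        y = y * 2;       // changes only the local copy of n
    }

    public static String mystery() {
        s = "world";
        n = 6;
        changer(s, n);
        return "s = " + s + ", n = " + n;
    }

    public static void main(String[] args) {
        System.out.println(mystery()); // s = world, n = 6
    }
}
```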
Hello,
How do I get the percentage of "Success / Total" events in a data model where status is either "success" or "failed"?
Trying to count both with one tstats search on the main namespace:
| tstats count as Total count(eval(status=success)) as Success from datamodel=events where nodename=event
Trying to pipe results from 2 tstats:
| tstats prestats=t count as Total from datamodel=events where nodename=event | tstats prestats=t append=t count as Success from datamodel=events where nodename=event event.status=success
Adding is_success = 0 or 1 to each event in the base data model (with an eval)? Preferably not.
Thanks for your help,
Try this:
| tstats count("event.status") AS Total sum("event.is_success") AS "Success" sum("event.is_failed") AS "Failed" from datamodel="events" where (nodename="event") | eval Percentage=round(((100/Total)*Success),2)
I hope this solves your problem.
Greetz, Robert
I'm pleased to release our first ever round table, where we place a group of developers in a locked room (not really), and ask them to debate one another on a single topic. In this first entry, we discuss exceptions and flow control.
Should Exceptions Ever be Used for Flow Control?
Failure is the inability of a software element to perform its function. Exception is an abnormal condition in software. Errors are due to unmet expectation/specification.
Errors cause Failures and are propagated via Exceptions.
So:
try { something(); } catch(SomeErrorType e){ return respondTo(e); }
Catching exceptions, as Csaba mentioned, is engineered for fringe, unexpected cases. These are often a result of malformed input or a failed data transfer.
Control flow structures in most languages are optimized to handle known cases, whether that's via an if/else stack or a switch/case. Error throwing, in general, wouldn't be as optimized as control flow in most programming languages.
The client of that code can use a try-catch to prevent the automatic propagation of the exception (as most languages will propagate it). However, I believe that try-catch should be used only for logic that is required to handle these unexpected situations. For example, a broken data transfer can be retried when the exception occurs. Or, some exceptions do not affect the outcome (such as when you want to create a directory that already exists). In this case, it can simply be suppressed.
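The retry idea reads naturally as a small loop that catches, swallows, and tries again until a limit is reached. A hedged Java sketch (transfer() and its failure behavior are placeholders I made up, not a real API):

```java
public class RetryDemo {
    // Placeholder for an unreliable operation such as a data transfer:
    // fails twice, then succeeds, to simulate a flaky connection.
    static int attempts = 0;

    static String transfer() throws Exception {
        attempts++;
        if (attempts < 3)
            throw new Exception("broken transfer");
        return "payload";
    }

    // Retry the operation, suppressing failures until the limit is hit
    public static String withRetries(int maxTries) {
        for (int i = 0; i < maxTries; i++) {
            try {
                return transfer();
            } catch (Exception e) {
                // swallow and try again
            }
        }
        return null; // out of retries
    }

    public static void main(String[] args) {
        System.out.println(withRetries(5)); // payload
    }
}
```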
And, of course, have display_errors turned off in production.
Aaron, exactly. What you've said does not contradict what I said about catching errors. Thanks for pointing out the more detailed solution.
If the authentication fails, it fails because of fringe cases - bad input, or some problem outside of the actual authentication function. I think, in these instances, we're talking more about a definition of control flow and fringe case than whether or not exceptions are a good way to handle the process of logging in.
This is also an abstraction of the authentication function itself.
An acceptable alternative might be to return a login object that you could run an if/else control over.
$attempt = Sentry::authenticate($credentials);

if ($attempt->status == "success") {
    $user = $attempt->user;
} else if ($attempt->status == "no_password") {
    // etc
}
For example, in the authentication form, if the user name must be an email address and the user types in something else, that could be an exception.
But if the user fills in the form correctly and just the user/pass combination is not a match, that is more likely another case to treat, and not an exception.
if ($attempt->status == "success") {
    $user = $attempt->user;
} else if ($attempt->status == "failure") {
    echo $attempt->message;
} else {
    echo "Something unexpected happened. Please try again.";
}
NodeJS has made a name for itself in this regard, and, for that matter, any "event-driven" runtime, like EventMachine or Twisted.
Csaba, here is a classic async code example:
Let's say that you are trying to read a file:
/usr/local/app.log. The way you do it in Node is:
var fs = require('fs'); // load the filesystem module

fs.readFile('/usr/local/app.log', function(err, doc) {
    if (err) {
        console.log('failed to read');
        return;
    }
    // process file
});
Because of the callback, you don't put try/catch around the call. Instead, you use a callback style to handle the result. I hope that makes it clear. In general, any operation that cannot be performed synchronously will have a callback style API to handle the results.
jQuery's AJAX helpers expose callbacks such as .error(), .success(), etc. This is an abstraction from what's really going on (checking the XHR object).
$.get("").success(function(data) {
    // do something with data
}).error(function(res) {
    // do something different; res contains information about the error
});
While this isn't either exception or if/else, it still solves the problem: handle both successful and non-successful AJAX requests. However, the jQuery implementation itself is not try/catch, because it is asynchronous.
var auth = require('authenticator');
var eventbus = EventBus.global();

auth.login("pavan", "pwd", function(err, result) {
    if (err) {
        var details = { username: "pavan", error: err };
        eventbus.put("Authentication failed", details);
        return;
    }
    eventbus.put("Authentication successful!!");
});
Note that we are using the concept of a system-wide eventbus to propagate messages. In this case, we will have a success or failure message with the appropriate payload. This kind of message passing is common in distributed systems and is a great way to have a control flow that spreads across machines.
Other more generally understood cross-boundary control flow is none other than the venerable "Email" and "SMS" messages. It may not be obvious at first glance, but a little introspection and you will see how it is control flow of a different kind, and not done via exceptions.
You can disagree or raise hell in an email, but the receiver is told in a message that may arrive much later than the time it was sent (and may be too late).
try {
    $conn = connectToDatabase($credentials);
} catch (NoDbAtThatUriException $e) {
    // handle it
} catch (LoginException $e) {
    // handle it
}
If connectToDatabase is synchronous, exceptions will work. Otherwise, you need callbacks. Also, there could be several forms of failures (different classes of exceptions). Do you care what kind of failure it is - especially if you are logging it somewhere?
A LoginException has to be notified to the user?
$e->handleIt().
I happened to have picked a bad example, because, in these situations, you basically need to get new data from the user. But, conceptually, you can have full logic, like:
catch (NoDbAtThatUriException $e) {
    $credentials->uri .= ":3065"; // add port
    // recall original function here
}
Obviously, this isn't the best example and you would probably want to determine if a port is already there, instead of infinitely appending them. But what I'm saying is to use exceptions for actual code, besides simply "alerting" the user.
In PHP, you can use set_exception_handler() to handle any uncaught exceptions. You can also have a catch (Exception $e) block as the last catch statement in the control flow, since all exceptions in PHP extend the native Exception class.
One advantage of exceptions over an if statement is that you will get a detailed error if you forget to accommodate one, whereas with an if statement, it would merely default to the else.
try {
    $user = $auth->login($credentials);
} catch (InvalidUsernameException $e) {
    // Redirect to login page with error message
    // Could even use $e->getMessage()
} catch (InvalidPasswordException $e) {
    // If this is the 5th attempt, redirect to reset password page
    // Otherwise, redirect to login page with error message
} catch (AccountLockedException $e) {
    // Redirect to special error screen explaining why they aren't allowed
} catch (Exception $e) {
    // Fallback for everything else
    // Log that we had an unexpected exception
    // Redirect to error page or something
}
Eventually, even if you do decide where it should be handled, debugging is still a problem. If you revisit the code after a few months, you will have little clue what is going on.
The point that I am trying to make is, with control flows of any kind, you are building an implicit state machine and you should strive hard to keep all possible states of control localized in your code. With exceptions, that can be difficult.
If you use return in your functions to communicate both good and bad scenarios, you may end up with some very unpredictable functions. This is mostly a problem with dynamically-typed languages like PHP, where even some of the built-in functions have this problem. For some reason, the people who made them decided that, for example, a function returns a string or an array on success, and false or -1 on failure.
There has to be a couple exception handlers before it reaches the user, starting from your DB layer to your web layer, and finally rendering it on the client. If you notice the control flow, which is currently spread across many layers, it can be difficult to handle smoothly. What's been your experience in this regard?
As for propagating exceptions between layers. In the project I work on at my job, most exceptions thrown are automatically propagated, and caught very close to the UI, at which point a message is presented to the user.
In other cases, such as creating a directory that already exists, the exception is simply caught by the client code, and discarded with something along the lines of:
try {
    $filesystemHandler->createDirectory('/tmp/dirname');
} catch (DirectoryExistsException $e) {
    return true;
}
Of course, other exceptions, like
NoPermissionToCreateDirectory will be propagated. I think this is a good example of controlling flow based on an exception.
The user needs to be notified about the progress of the batch process, and eventually alerted when the process is complete. There could be failures, like invalid file, invalid transform, not enough disk space to store the file, etc.
Do you see a control flow here, more like a factory assembly line? How will this be modeled with exceptions, and with regular control constructs?
class ImageTransformer {
    private $images = [];
    private $transformer;
    private $failedTransforms = [];

    function __construct($images) {
        $this->images = $images;
        $this->transformer = new TransformOneImage();
    }

    function transformAll() {
        foreach ($this->images as $image) {
            try {
                $this->transformer->transform($image);
            } catch (CanNotTranformImageException $e) {
                $this->failedTransforms[] = $e->getMessage();
            }
        }
        if (!emptyArray($this->failedTransforms)) {
            // code to notify user here
            // and finally
            return false;
        }
        return true;
    }
}

class TransformOneImage {
    function transform($image) {
        $transformedImage = null; // do image processing here
        if (!$transformedImage) {
            throw new CanNotTranformImageException($image);
        }
        return $transformedImage;
    }
}
The real question here is what kinds of cases are considered exceptions. We could easily rewrite this to localize the errors, which would reduce the time needed to identify an error's source. Of course, with this example, it wouldn't be too difficult to identify a source.
You could wrap each transformation in its own try statement, and keep in line with the transformations.
When authenticating a user, there essentially two main ways we could have tackled it:
- try/catch
- if/else
Let me explain the ways that we could have tackled if/else:
if (Sentry::authenticate($args)) {
    // Great, go ahead
} else {
    // Right, something went wrong, but what?
}
But, what happens if we want to find out a little more information than a simple "Nope, you're not allowed"? Getting an object back is a great approach:
$response = Sentry::authenticate($args);

if ($response->hasError()) {
    switch ($response->getError()) {
        case 'login_required':
            $message = 'You didn\'t enter any login details.';
            break;
        case 'password_required':
            $message = 'You didn\'t enter a password.';
            break;
        case 'user_not_found':
            $message = 'No user was found with those credentials.';
            break;
        // And so on...
    }

    // Let's pretend we're working in L4
    return Redirect::route('login')->withErrors([$message]);
} else {
    return Redirect::route('profile');
}
This has some advantages, primarily due to the switch statement:
if ($response->hasError()) {
    switch ($response->getError()) {
        // Consolidate errors
        case 'login_required':
        case 'password_required':
        case 'user_not_found':
            $message = 'No user was found with those credentials.';
            break;
        // And so on...
    }

    return Redirect::route('login')->withErrors([$message]);
} else {
    return Redirect::route('profile');
}
However, exceptions give a lot more control because you can let them be handled at any level within your application. Exceptions can also extend each other, while all extend the base
Exception class (obviously talking PHP here).
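The same hierarchy idea holds in Java (or any language with class-based exceptions): specific types extend a common base, and catch blocks are matched top-down, so a catch of the base type acts as a fallback. A sketch with purely illustrative names:

```java
public class HierarchyDemo {
    // A small hypothetical hierarchy: both specific types share a base
    static class AuthException extends Exception {}
    static class LoginRequiredException extends AuthException {}
    static class UserSuspendedException extends AuthException {}

    // Rethrows the given problem and shows which catch block wins
    public static String handle(Exception problem) {
        try {
            throw problem;
        } catch (LoginRequiredException e) {
            return "ask for credentials";
        } catch (AuthException e) {
            return "generic auth failure"; // catches UserSuspendedException too
        } catch (Exception e) {
            return "fallback";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(new LoginRequiredException())); // ask for credentials
        System.out.println(handle(new UserSuspendedException())); // generic auth failure
        System.out.println(handle(new RuntimeException()));       // fallback
    }
}
```

Note the ordering: the most specific type must come first, or the broader catch would shadow it.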
A downside to the Exceptions used (particularly with Sentry) is the verbosity of them. This is because we have separated out the different components within Sentry (groups / users / throttling) so that you can take the components you want and build a totally kickass auth system. So, everything that belongs to the 'users' component of Sentry sits in the
Cartalyst\Sentry\Users namespace. A dead-simple way to decrease verbosity is either through the
use keyword:
use Cartalyst\Sentry\Users\LoginRequiredException;

Or, of course, you can go ahead and add a class_alias() for global aliasing of the class. All of a sudden, we bring the verbosity down to (and with some practical examples):
try {
    // Set login credentials
    $credentials = array(
        'email'    => 'john.doe@example.com',
        'password' => 'test',
    );

    // Try to authenticate the user
    $user = Sentry::authenticate($credentials);
} catch (LoginRequiredException $e) {
    // Or a "goto", take your pick
    return $this->userFail();
} catch (PasswordRequiredException $e) {
    return $this->userFail();
} catch (UserNotFoundException $e) {
    return $this->userFail();
} catch (UserNotActivatedException $e) {
    // Take to a page where the user can resend their activation email
    return Redirect::to('users/activate');
} catch (UserSuspendedException $e) {
    return Redirect::to('naughty');
} catch (UserBannedException $e) {
    return Redirect::to('naughty');
} catch (Exception $e) {
    // Show a 500 page or something?
}

return Redirect::route('profile');
Verbosity is one downside to try/catch, but it can be decreased through the use of
use (bad wording there, right?) and class aliases.
Let's consider the positives:
- Logic can be handled at any level of the app or through a custom registered handler (at least in PHP).
- try/catch are "low-level". I mean this in the sense that they don't really change. In PHP, there is always $e->getMessage() and $e->getCode() (due to inheritance from "Exception"). If I return an object instead (such as $response->hasError()), the developer needs to know the exposed API for that object. Also, the object may change in the future. try/catch is a syntax which I don't see changing. It's intuitive.
- The only real alternative to having multiple catches (with a catch-all) is switch. But the verbosity of a switch statemtn is much the same as try/catch.
- Mixing true/false and try/catch in the same statement is a recipe for confusion. As @philsturgeon said so well "With a mailer for example, a LOT of things can go wrong in the sending of an email, so you want to throw exceptions if the email fails to contact the SMTP server, if it fails to include a from address, if it cannot find the sendmail install, whatever. What if it doesnt have a valid email address? Is that an exception, or should it return false and make you look up an error code? Why half-and-half it?"
- In PHP, there's no such (real) thing as asynchronous. Before you all jump on my back about spawning processes and all that jank, PHP doesn't really support it. I don't see how using a callback can really improve the application (reminder: I'm talking PHP) or the user experience, as you can achieve the same "progress" feedback (through an app polling your script) throughout the process of whatever happens in the "try" block. 10 points for the worst explanation there, but I think the point still comes across. You can do everything the same through a try/catch as you can using a callback in a single-thread language.
I think, at the end of the day, it comes down to different use-cases. Some may argue that it's the same as "tabs vs spaces," but I don't believe it is. I think there are scenarios when an if/else is appropriate, but, if there is more than one possible non-successful outcome, I believe that a try/catch is usually the best approach.
Here's two more examples that we could discuss:
function create_user($name) { if (strlen($name) === 0) return false; if (user_exists($name)) return false; // This believe it or not returns a user :P return super_magic_create_method($name); } // EXAMPLE ONLY, DON'T SHOOT ME if ($user = create_user($_POST['name'])) { // Hooray } else { // What went wrong?! } function create_user($name) { if (strlen($name) === 0) throw new InvalidArgumentException("You have not provided a valid name."); if (user_exists($name)) throw new RuntimeException("User with name [$name] exists."); // This believe it or not returns a user :P return super_magic_create_method($name); } try { $user = create_user($_POST['name']); } catch (InvalidArgumentException $e) { // Tell the user that some sort of validation failed } // Yep, catch any exception catch (Exception $e) { }
And, for another example, in Laravel 4, there is a validation class. It works like so:
$rules = ['name' => 'required', 'email' => 'required|email']; $validator = Validator::make(Input::get(), $rules); if ($validator->passes()) { // Yay } else { foreach ($validator->errors() as $error) { } }
What if it worked like this?
try { $rules = ['name' => 'required', 'email' => 'required|email']; $validator = Validator::make(Input::get(), $rules); $validator->run(); } catch (ValidationFailedException $e) { foreach ($e->getErrors() as $error) { } } catch (InvalidRulesException $e) { // You made bad rules }
Just food for thought. The first one reads "prettier," but the second one could be seen as optimal. Any thoughts regarding this?
ifstatement should now be replaced with an exception. But they are far more "accurate" and readable than, for example. a function that returns
-1or
false.
Perhaps exceptions, then, are best suited for when something goes wrong by definition. For instance, when there is input error, when there is server error, etcetera.
Maybe a bad use-case scenario is catching exceptions for things that are expected and good cases. I certainly agree with what Ben laid out, particularly when it comes to what I have now (un)officially trademarked the Un-informed else block™. (You may affectionately refer to it as UEB™.) This poor catch-all doesn't have anything good to tell us, and unfortunately gets hit with all the bad, with little-to-no recourse. What a shame.
And, thus, we may happen upon an answer! To keep all of our control flow informed, and to use exceptions for things that are, indeed, exceptions.
I should be able to try to log in a user, and know why it failed. Now, the question is, how? I think the answer, as we have all agreed, is that it simply depends.
try { loginUser($creds); } catch (LoginSuccess $success) { //store session and cookies }
So my opinion is to use them for errors. Not limited to "unexpected" errors, but any errors.
Also a point worth adding is that it depends on what you are doing - whether you are performing an action or checking something.
In my Gist above, I had a method validate, and, within the code, I wrote something like:
if ($transform->validate()) { //process } else { //throw exception }
I could have thrown the exception inside the validate method, but that would be destructive to the code's readability. You would have something like:
$transform->validate(); $transform->process();
This makes it unclear what is going on. To summarize my point of view, I would say: use Exceptions when performing an action, and use
if statements when verifying data.
It also works out syntax-wise, because you can say "
if (x) then y", when you are checking data, and
try { x } catch (a snag) for actions. It wouldn't be grammatically correct to interchange them.
- There are multiple points of failure in a particular function/method.
- There is a need to discern the difference between those points of failure and handle them in more than one way.
I think the problem is that some people consider the language an impediment to expressing their ideas. Thus, a mix of if/else with try/catch makes the code harder to understand.
The problem, as I see it, is the exact opposite. The code should reflect the concepts that you want to implement. Using try/catch for everything, except for the happy path, hides a great deal of execution logic details. Anything becomes white or black.
On the other hand, if you use if/else for the cases when your application goes down paths that are part of its logic (like wrong username - passord pair), and then try/catch for the situations when your application gets into an unexpected / unrecoverable state, it would be much more obvious what the possible behavior and paths of execution are.
- Should I ever use JavaScript as a server-side language? No, because that's not what it was designed for.
- Should I use LESS to write CSS? No, because it might confuse someone.
- Is it ever okay to use static methods? No, it is never okay.
All extreme and not constructive.
I think that by presenting valid reasoning from both sides, we reach an even more important conclusion: not condemning other developers for not seeing eye-to-eye with ourselves. If any of you hand me some code, I'm not going to complain about your decision to use exceptions or not. I just want you to be consistent in your approach, and you better comment it! :)
I doubt this will change anything, but most developers could use some more humility and empathy - me especially. That's at least my goal in this: not to convince anyone that my way is the way, or even a better way, but to demonstrate that you shouldn't tell me that my way is flat-out wrong, because maybe we're not looking at it the same way.
It is worth noting why Exceptions are particularly suited for propagating errors. It's all about context and making sure this context is passed on to wherever is best to handle the exception. Most runtimes have an automatic way of bubbling up the exception and that makes it very convenient to inject exception handlers at the right places in the software stack.
You cannot define a bubbling route for exceptions using any language construct. The only possibility is to raise or throw them. It's the runtime's responsibility to do the bubbling. This implicit mechanism makes exceptions the ideal abstraction for propagating errors.
Your Turn
So that's what we have to say on this subject. What are your views? Can an argument be made for using exceptions for flow control? | http://esolution-inc.com/blog/round-table-1-should-exceptions-ever-be-used-for-flow-control--net-30947.html | CC-MAIN-2017-51 | refinedweb | 3,501 | 53.71 |
This is the winning article from the recent competition we ran to promote Flex articles on sitepoint.com.
I remember the first time I saw the Mini Configurator on the Mini USA site. I was blown away — I loved just playing with the colors and the options — it was a truly immersive experience of the kind that only Flash applications can offer, and I’m sure it sold more than a few Minis.
Ever since then I’ve wanted to create one of my own, and in this article I’ll show you how to do just that. For this example, I’ll assume that you’re across the basics of working with Flex 3 — that you’re set up with an editor such as Flex Builder 3, are familiar with MXML and ActionScript 3, and can create basic applications. If you’re after a beginner’s tutorial, or need a refresher, try Rhys Tague’s beginner’s tutorial.
Getting Started
The idea of a "configurator" is pretty simple: provide a 3D model of some object, then allow the user to change the colors, or to hide and show various parts of the model.
To build your own configurator, you need three things.
- a 3D modeling tool
- a 3D model of an object (either something that you’ve downloaded or created yourself)
- a tool for displaying that model in a Flex application
As far as 3D modelling tools go, I’ve found that Google’s Sketchup is ideal. It’s free, which is always good, but the other bonus is the huge database of models in Sketchup format that are available in the Sketchup 3D Warehouse.
After downloading and installing Sketchup, I visited the 3D warehouse and selected a model to use in my application. For this article, I’ve chosen one of the light cycles from the Disney movie Tron. My decision to use this object was partly cultural (every true geek loves the movie) and partly because it’s a fairly simple shape, so allowing the user to alter it (for example, by changing its color) wouldn’t be too complicated.
The light cycle model I’ve used is displayed in the image below.
My model light cycle didn’t actually exist in the exact format pictured above. To begin with, there were a pair of light cycles — I deleted the second one, and tilted the remaining cycle a little, in order to begin with a shape that was perfectly vertical. I’d recommend you do the same for your own model.
You should also reposition your model so that it’s centred around the origin point (this point is represented by the intersection of the red, green, and blue lines in Sketchup). This step is really important, because when you load the model you want to know where it exists in your scene. If the model is floating off in space somewhere, it’s going to be difficult to find, so be sure it’s oriented around the point (0, 0, 0).
To import the light cycle into your Flex application, you need to save it in a standard file format. The format that seems to be the most popular for 3D tasks is Collada, but unfortunately the Export to Collada format functionality is only available in Sketchup Pro.
Luckily, I have a trick up my sleeve: if you export the model in Google Earth format, there’s a Collada file hidden in the Google Earth
.kmz file. All you need to do is change the extension of the Google Earth file from
.kmz to
.zip, and unzip the file. Among the unzipped files you’ll find a directory named
models that contains the Collada model. Voila! You’ve exported a Collada file from the free version of Sketchup!
Installing PaperVision 3D
With our Collada model in hand, it’s time for us to find a way import our light cycle into a Flex application. Our first task is to select a 3D rendering engine for Flash to display it with. There are two free 3D engines to choose from at the moment; PaperVision 3D and Away3D. For this example, I’ve chosen PaperVision because of its integration with the ASCollada library, which is a very comprehensive Collada parser for ActionScript.
To download the latest version of PaperVision 3D, perform an SVN checkout from the PaperVision Subversion repository. If you’re not comfortable working with Subversion, you can download the files that I’ve used to create this example from this article’s code archive.
Then create a new Flex application project (either using Flex Builder 3 or the Flex 3 SDK) and copy the
com and
org directories from the
GreatWhite branch into your Flex application project.
Next, create a new
assets folder, and copy the model file I exported from Sketchup into it; call that file
cycle.dae. If your model contains texture files, then you’ll need to copy those files into your Flex project as well. You’ll also need to edit the
.dae file (which is really just XML) to make sure that the texture objects point to the correct texture paths. Thankfully the little light cycle model that we’re using for this example doesn’t make use of any textures.
With everything in place, your project should look something like the image shown below.
The
assets directory holds the model and any textures it needs. And the
com and
org folders come from the PaperVision Great White code.
Viewing the Model
To get our feet wet, we’ll first try something very simple: viewing the model. The code for this Flex application, which I’ve called
model.mxml, is shown below:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
<mx:Script>
<![CDATA[
import mx.core.UIComponent;
import org.papervision3d.cameras.FreeCamera3D;
import org.papervision3d.render.BasicRenderEngine;
import org.papervision3d.objects.parsers.DAE;
import org.papervision3d.objects.DisplayObject3D;
import org.papervision3d.scenes.Scene3D;
import org.papervision3d.view.Viewport );
var modelCol:DAE = new DAE();
modelCol.load( XML( new MyModel() ) );
var model:DisplayObject3D = scene.addChild( modelCol );
camera.y = -2000;
camera.z = 2500;
camera.lookAt( model );
addEventListener(Event.ENTER_FRAME,onEnterFrame);
}
private function onEnterFrame( event : Event ):void
{
renderer.renderScene(scene,camera,viewport);
}
]]>
</mx:Script>
<mx:Canvas
</mx:Application>
This is about as simple as 3D gets — which, admittedly, is not particularly simple. To render something in 3D, you need four pieces of information:
- The Scene: The layout of the model, or models, in space.
- The Viewport: The Flash sprite that will receive the rendered image.
- The Camera: This is the camera, or more specifically, the location and rotation of the camera within the scene.
- The Renderer: The engine which takes the scene and camera and renders the image for the viewport.
Without breaking down every line, our
onInit method, which is called when the application starts up, does the following:
- Load the model.
- Add the model to the scene.
- Position the camera.
- Have the camera look at the model.
Since our model is located at position (0, 0, 0), the code above moves the camera back from the model by adjusting the y and z coordinates. The
onEnterFrame method finishes the job by using the renderer to render the scene into the viewport.
Launch this application in Flex Builder 3, and you should see something like the view shown in Figure 3.
Not too shabby! In fact, what we’ve achieved here is quite significant — especially considering that Collada is actually a very complex XML standard. You may find that not all models exported from Sketchup will work with PaperVision — in fact, you’ll probably have to do some tweaking of both your Sketchup model (to simplify the shape) and your Flex application in order to produce something that works well.
One other important point to remember is that the more complicated your model, the longer it will take to load and render. You should therefore keep your model as simple as possible. For example, if your model is of a car, and you want to allow your users to choose the paint color of the chassis, then your model should not include any information about the car’s interior. All of that interior stuff represents unnecessary complexity that will result in lower performance and longer load times for your user.
Interacting With the Model
To keep things simple, we’ll restrict the changes that our configurator users can make to just one — the color of the light cycle. This means we’re going to change the color of the "material" used in the model. All 3D models are composed of polygons covered in a "material." That material can be colored, shaded, textured, bump-mapped, and distorted in all manner of ways. In this case, we’re going to use a shaded color material.
The code for our light cycle color configurator is shown below:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
<mx:Script>
<![CDATA[
import mx.utils.ColorUtil;
import mx.core.UIComponent;
import org.papervision3d.materials.utils.MaterialsList;
â‹®
import org.papervision3d.lights.PointLight );
private var model:DisplayObject3D = null;
);
loadModel();
camera.y = -2700;
camera.x = 0;
camera.z = 2000;
camera.lookAt( model );
addEventListener(Event.ENTER_FRAME,onEnterFrame);
}
private function loadModel() : void {
if ( model != null )
scene.removeChild( model );
var light:PointLight3D = new PointLight3D( true,true );
light.z = -2000;
light.x = 500;
light.y = 500;
var lightColor:uint = 0x111111;
var modelCol:DAE = new DAE();
modelCol.scale = 1.1;
modelCol.load( XML( new MyModel() ), new MaterialsList( {
material0:new FlatShadeMaterial( light, 0x000000, lightColor ),
ForegroundColor:new FlatShadeMaterial( light, 0x000000, lightColor ),
material1:new FlatShadeMaterial( light, clrPicker.selectedColor,
lightColor ),
material2:new FlatShadeMaterial( light,
mx.utils.ColorUtil.adjustBrightness(clrPicker.selectedColor,-20), lightColor ),
FrontColor:new FlatShadeMaterial( light, 0xFFFFFF, lightColor ),
material3:new FlatShadeMaterial( light, 0x000099, lightColor ),
material4:new FlatShadeMaterial( light,
mx.utils.ColorUtil.adjustBrightness(clrPicker.selectedColor,-200), lightColor )
} ) );
modelCol.roll( 28 );
model = scene.addChild( modelCol );
light.lookAt(model);
}
private function onEnterFrame( event : Event ):void
{
renderer.renderScene(scene,camera,viewport);
}
]]>
</mx:Script>
<mx:Panel
<mx:Form>
<mx:FormItem
<mx:ColorPicker
</mx:FormItem>
</mx:Form>
</mx:Panel>
<mx:Canvas
</mx:Application>
As you can see, I’ve moved the code that loads the model into a new method called
loadModel. This method is executed when the Flex application starts up, as well as any time our user chooses a new color.
Our code also provides a
MaterialList object to the DAE parser that loads the Collada light cycle model. This list of material corresponds to the materials exported by Google Sketchup. I found the names by looking into the DAE file myself and experimenting with which materials changed which portions of the Cycle.
The material I chose for the color portions was a
FlatShadedMaterial. This material takes:
- a light source
- a color for the material
- a color for the light
We’re using the color provided by the color picker, and adjusting it to make it darker or lighter using the
ColorUtil Flex class.
Running our configurator application in Flex Builder produces the following result.
Our users can now select a color using the standard
ColorPicker control, and the light cycle model will change accordingly.
Providing a light source really adds to our model some depth that wasn’t apparent in the first rendering. Also, the fact that our model is clearly constructed from polygons actually adds to the whole "Tron" look and feel.
Where to Go From Here
There are lots of directions in which you could take this example. For example, you could hide or show various parts of the model by adjusting the
visible parameter on the
DisplayObject3Delements property within the model; you could add some direct interaction with the model by allowing the user to alter the position of the camera using the mouse; you could even use Flex effects on the model to make it glow, fade or zoom away when the customer takes it for a spin!
Whichever direction you take, you can have lots of fun with this code (don’t forget to download it!). I look forward to seeing your creations in an online store window soon!
No Reader comments | http://www.sitepoint.com/create-3d-product-viewer-flex-3/ | CC-MAIN-2015-32 | refinedweb | 2,018 | 56.25 |
#define SECONDS_PER_YEAR (60 * 60 * 24 * 365)UL
The purpose of this question is to test the following: #define syntax: no semi-colon at the end, the need to parenthesize hence the need for the L, telling the compiler to treat the variable as a Long Bonus: If you modified the expression with a UL (indicating unsigned long) Q: MIN macro, a macro that takes two arguments and returns the smaller of the two arguments #define MIN(A,B) ((A) < = (B) ? (A) : (B)) variable = condition ? value_if_true : value_if_false Ternary conditional operator: This operator exists in C because it allows the compiler to produce more optimal code than an if-then-else sequence. Given that performance is normally an issue in embedded systems, knowledge and use of this construct is important:
MACROs Macros are not type safe, and can be expanded regardless of whether they are syntatically correct
the compile phase will report errors resulting from macro expansion problems. Macros can be used in context where you don't expect, resulting in problems Macros are more flexible, in that they can expand other macros - whereas inline functions don't necessarily do this. Macros can result in side effects because of their expansion, since the input expressions are copied wherever they appear in the pattern. First, the preprocessor macros are just "copy paste" in the code before the compilation. So there is no type checking, and some side effects can appear:
MACRO:
#define max(a,b) ((a<b)?b:a)
The side effects appear if you use max(a++,b++) for example (a or b will be incremented twice). Instead, use (for example)
INLINE FN:
inline int max( int a, int b) { return ((a<b)?b:a); }
Infinite loops
Infinite loops often arise in embedded systems. How does you code an infinite loop in C?
There are several solutions to this question. My preferred solution is:
while(1) {….}
That is. the integer pointed to by ‘a’ is modifiable. . d) An array of 10 integers d) int a[10]. int const * a const. Const What does the keyword const mean? const means "read-only" const int a. (but outside the body of a function) is accessible by all functions within that module. it is a localized global Functions declared static within a module may only be called by other functions within that module. neither the integer pointed to by ‘a’. f) A pointer to an array of 10 integers f) int (*a)[10]. Static What are the uses of the keyword static? A variable declared static within the body of a function maintains its value between function invocations A variable declared static within a module. but the pointer is not) // a is a const pointer to a const integer (that is. b) A pointer to an integer b) int *a. nor the pointer itself may be modified). the integer isn't modifiable. h) An array of ten pointers to functions that take an integer argument and return an integer h) int (*a[10])(int). const int *a. g) A pointer to a function that takes an integer as an argument and returns an integer g) int (*a)(int). e) An array of 10 pointers to integers e) int *a[10]. c) A pointer to a pointer to an integer c) int **a.e. // ‘a’ is a const (read-only) integer // ‘a’ is a const (read-only) integer // ‘a’ is a pointer to a const integer (i. int const a. int * const a. give definitions for the following: a) An integer a) int a.Data declarations 5.e. but the pointer is) // ‘a’ is a const pointer to an integer (i. It is not accessible by functions within any other module. the scope of the function is localized to the module within which it is declared. Using the variable a. That is.
the remaining bits should be unmodified. status registers) Non-automatic variables referenced within an interrupt service routine Variables shared by multiple tasks in a multi-threaded application Can a parameter be both const and volatile ? Explain.Volatile What does the keyword volatile mean? Give three different examples of its use. However. return a * a. *ptr = 0xaa55. An example is when an ISR modifies a pointer to a buffer. Use #defines and bit masks. In both cases. A more obscure approach is: *(int * const)(0x67a9) = 0xaa55. It is const because the program should not attempt to modify it Can a pointer be volatile? Yes. } Accessing fixed memory locations On a certain project it is required to set an integer variable at the absolute address 0x67a9 to the value 0xaa55. write two code fragments. In particular. ptr = (int *)0x67a9. a = *ptr.b. Consequently. . This is a highly portable method and is the one that should be used. it is possible for a and b to be different. The correct way to code this is: long square(volatile int *ptr) { int a. However. although this is not very common. since *ptr points to a volatile parameter. An example is a read-only status register. My optimal solution to this problem would be: #define BIT3 (0x1 << 3) static int a. The exact syntax varies depending upon one's style. a = *ptr. the optimizer must be careful to reload the variable every time it is used instead of holding a copy in a register. This problem tests whether you know that it is legal to typecast an integer to a pointer in order to access an absolute location. the compiler can make no assumptions about the value of the variable. The compiler is a pure ANSI compiler. } The intent of the code is to return the square of the value pointed to by *ptr. I would typically be looking for something like this: int *ptr. the compiler will generate code: int square(volatile int *ptr) { int a. The second should clear bit 3 of ‘a’. 
A volatile variable is one that can change unexpectedly. } void clear_bit3(void) { a &= ~BIT3. } Bit manipulation Given an integer variable ‘a’. } Because it's possible for the value of *ptr to change unexpectedly. Examples of volatile variables are: Hardware registers in peripherals (for example. Write code to accomplish this task. What's wrong with the following function? int square(volatile int *ptr) { return *ptr * *ptr. void set_bit3(void) { a |= BIT3. It is volatile because it can change unexpectedly. return a * b. The first should set bit 3 of ‘a’. Yes. b = *ptr.
unsigned int compzero = 0xFFFF. Thus comes a very large positive integer and the expression evaluates to greater than 6. General rule of thumb is that ISRs should be short and sweet. printf() often has problems with reentrancy and performance.. On many processors/compilers. int b = -20. . The following code uses __interrupt to define an interrupt (ISR). and I expect that only the very best candidates will do well on them. return area. it's hard to know where to start: ISRs cannot return a value. whereas computer programmers tend to dismiss the hardware as a necessary annoyance. floating-point operations are not necessarily re-entrant. I wouldn't be too hard on you. good embedded programmers are critically aware of the underlying hardware and its limitations. This question really gets to whether the candidate understands the importance of word length on a computer. In some cases.. unsigned int zero = 0. Dynamic memory allocation Although not as common as in non-embedded computers. Code examples What does the following code output and why? void foo(void) { unsigned int a = 6. Comment on the following code fragment. /*1's complement of zero */ On machines where an int is not 16 bits. I'm looking more at the way the candidate tackles the problems. this will be incorrect. area). embedded systems do still dynamically allocate memory from the heap. and so on. rather than the answers. If you missed points three and four. __interrupt double compute_area (double radius) { double area = PI * radius * radius. These questions are hard. It should be coded: unsigned int compzero = ~0. This is a very important point in embedded systems where unsigned data types should be used frequently. problems with garbage collection. In my experience. } The answer is that this outputs "> 6". If you don't understand this. (a+b > 6) ? puts("> 6") : puts("<= 6"). one simply cannot do floating point in an ISR. } This function has so much wrong with it. printf("\nArea = %f". 
you aren't hired ISRs cannot be passed parameters. The reason for this is that expressions involving signed and unsigned types have all operands promoted to unsigned types.Interrupts keyword is __interrupt. Anyway. In posing these questions. What are the problems with dynamic memory allocation in embedded systems? Here. have fun. why doing floating-point math here? In a vein similar to the third point. variable execution time. I expect the user to mention memory fragmentation.
What does the following code fragment output and why? char *ptr.p2. the above code will output "Got a valid pointer. Which method.p4. I stumbled across this only recently when a colleague of mine inadvertently passed a value of 0 to malloc and got back a valid pointer! That is. tPS p3. Thus. Consider the declarations: dPS p1. and came up with the "maximum munch" rule. The answer is the typedef is preferred." I use this to start a discussion on whether the interviewee thinks this is the correct thing for the library routine to do. this is perfectly legal syntax. as. if ((ptr = (char *)malloc(0)) == NULL) puts("Got a null pointer"). Obscure syntax C allows some appalling constructs. The second example correctly defines p3 and p4 to be pointers. and c = 12. else puts("Got a valid pointer"). p2. this code is treated as: c = a++ + b. The intent in both cases is to define dPS and tPS to be pointers to structure s. is preferred and why? This is a very subtle question. The question is how does the compiler treat it? Those poor compiler writers actually debated this issue. which stipulates that the compiler should bite off as big (and legal) a chunk as it can. This is a fun question. The first expands to: struct s * p1. which is probably not what you wanted. and anyone who gets it right (for the right reason) is to be congratulated or condemned. This question is intended to be a lighthearted end to the quiz. Hence. It defines p1 to be a pointer to the structure and p2 to be an actual structure. consider the following code fragment: #define dPS struct s * typedef struct s * tPS. Is this construct legal. a = 6. For instance. b = 7. and if so what does this code do? int a = 5. c = a+++b. Getting the right answer here is not nearly as important as the way you approach the problem and the rationale for your decision. . Typedef Typedef is frequently used in C to declare synonyms for pre-existing data types. after this code is executed. 
It is also possible to use the preprocessor to do something similar. b = 7. c. if any. believe it or not. | https://www.scribd.com/document/88795857/Embedded | CC-MAIN-2019-13 | refinedweb | 1,847 | 69.07 |
Mark Russell <mrussell8081 at pacbell.net> writes:

> Hi Dave,

Hi Mark. Please post Boost.Python questions to the C++-sig:

> I am using Boost Python to wrap DirectX and have run into a problem
> I can't seem to get around. def seems to have trouble with
> __stdcall; this works ok for me when I call bind directly, so I'm not
> sure where to go from here. I have attached a short test example
> cloned from the bind tests to show you where the problem is. I am
> using msvc 6.5 and have tried both boost 1.29 and the latest from
> CVS. Thanks in advance -- great library! I am very excited to be
> working with it.

Thanks, I'm excited that so many people are using it!

> Test Code:
>
> #define BOOST_BIND_ENABLE_STDCALL
> #define BOOST_MEM_FN_ENABLE_STDCALL
>
> #include <boost/python.hpp>
> using namespace boost::python;
>
> long __stdcall f_0()
> {
>     return 17041L;
> }
>
> BOOST_PYTHON_MODULE_INIT(test_std_call)
> {
>     def("f_0", f_0);
> }
>
> Error Message:
>
> Compiling...
> stdcall.cpp
> c:\boost\boost\python\detail\arg_tuple_size.hpp(61) : error C2664:
>   'struct boost::python::detail::char_array<0> __cdecl
>   boost::python::detail::arg_tuple_size_helper(long (__cdecl *)(void))' :
>   cannot convert parameter 1 from 'long (__stdcall *)(void)'
>   to 'long (__cdecl *)(void)'
>   This conversion requires a reinterpret_cast, a C-style cast or
>   function-style cast

Right. The problem is that there are lots of Boost.Python components
which don't account for the presence of "extralegal" constructs like
__stdcall and __fastcall. In order to handle these automatically we'd
need, at the very least, to extend boost/python/detail/arg_tuple_size.hpp
to handle these cases, and I'm guessing that there are quite a few other
places like that in the code as well (e.g.
boost/python/detail/returning.hpp). Here are your choices:

1. Implement the extensions yourself and submit a patch. We'd really
   love that!

2. Write "thin wrapper" functions around all the functions you want to
   wrap, and wrap the wrappers:

       long f_0_aux() { return f_0(); }
       ...
       def("f_0", f_0_aux);

3. Wait for us to get around to extending Boost.Python to deal with
   __stdcall. Joel may be working on an ActiveX project soon himself, so
   it's not an impossibility that it will happen. However, this is still
   somewhat of a long shot.

--
David Abrahams
dave at boost-consulting.com
* Boost support, enhancements, training, and commercial distribution
On Wed, Sep 16, 2009 at 11:11:57PM -0400, Gregory Haskins wrote:
> Avi [...]
> eventfd with the fastpath in Ira's rig? How do I signal the eventfd
> (x86->ppc, and ppc->x86)?

Sorry to reply so late to this thread, I've been on vacation for the
past week. If you'd like to continue in another thread, please start it
and CC me.

On the PPC, I've got a hardware "doorbell" register which generates 30
distinguishable interrupts over the PCI bus. I have outbound and inbound
registers, which can be used to signal the "other side".

I assume it isn't too much code to signal an eventfd in an interrupt
handler. I haven't gotten to this point in the code yet.

> To take it to the next level, how do I organize that mechanism so that
> it works for more than one IO-stream (e.g. address the various queues
> within ethernet or a different device like the console)? KVM has
> IOEVENTFD and IRQFD managed with MSI and PIO. This new rig does not
> have irqfd/eventfd like shm-signal or virtqueues to do shared-memory
> based event framework, governed by what you plug into it (ala
> connectors and devices).
>
> For instance, the vbus-kvm connector in alacrityvm chooses to put
> DEVADD and DEVDROP hotswap events into the interrupt stream, because
> they are simple and we already needed the interrupt stream anyway for
> fast-path.
>
> As another example: venet chose to put ->call(MACQUERY) "config-space"
> into its call namespace because its simple, and we already need
> ->calls() for fastpath. It therefore exports an attribute to sysfs
> that allows the management app to set it.
>
> I could likewise have designed the connector or device-model
> differently as to keep the mac-address and hotswap-events somewhere
> else (QEMU/PCI users [...] Ira's hardware. But if you must bring this
> up, then I will reiterate that you just design the connector to
> interface with QEMU+PCI and you have that too if that was important
> to you.
>
> But on that topic: Since you could consider KVM a "motherboard
> manufacturer" of sorts (it just happens to be virtual hardware), I
> don't know why KVM seems to consider itself the only motherboard
> manufacturer in the world that has to make everything look legacy. If
> a company like ASUS wants to add some cutting edge IO controller/bus,
> they simply do it. Pretty much every product release may contain a
> different array of devices, many of which are not backwards compatible
> with any prior silicon. The guy/gal installing Windows on that system
> may see a "?" in device-manager until they load a driver that supports
> the new chip, and subsequently it works. It is certainly not a
> requirement to make said chip somehow work with existing
> drivers/facilities on bare metal, per se. Why should virtual systems
> be different?
>
> So, yeah, the current design of the vbus-kvm connector means I have to
> provide a driver. This is understood, and I have no problem with that.
>
> The only thing that I would agree has to be backwards compatible is
> the BIOS/boot function. If you can't support running an image like the
> Windows installer, you are hosed. If you can't use your ethernet until
> you get a chance to install a driver after the install completes, its
> just like most other systems in existence. IOW: It's not a big deal.
>
> For cases where the IO system is needed as part of the boot/install,
> you provide
Neither of those are available> here in Ira's case yet the general concepts are needed. Therefore, we> have> virtio-vbus + vbus-ira-connector to use the vbus framework. Either> model can work, I agree.> Yes, I'm having to create my own bus model, a-la lguest, virtio-pci, andvirtio-s390. It isn't especially easy. I can steal lots of code from thelguest bus model, but sometimes it is good to generalize, especiallyafter the fourth implemention or so. I think this is what GHaskins triedto do.Here is what I've implemented so far:* a generic virtio-phys-guest layer (my bus model, like lguest) - this runs on the crate server (x86) in my system* a generic virtio-phys-host layer (my /dev/lguest implementation) - this runs on the ppc boards in my system - this assumes that the kernel will allocate some memory and expose it over PCI in a device-specific way, so the guest can see it as a PCI BAR* a virtio-phys-mpc83xx driver - this runs on the crate server (x86) in my system - this interfaces virtio-phys-guest to my mpc83xx board - it is a Linux PCI driver, which detects mpc83xx boards, runs ioremap_pci_bar() on the correct PCI BAR, and then gives that to the virtio-phys-guest layerI think that the idea of device/driver (instead of host/guest) is a goodone. It makes my problem easier to think about.I've given it some thought, and I think that running vhost-net (orsimilar) on the ppc boards, with virtio-net on the x86 crate server willwork. The virtio-ring abstraction is almost good enough to work for thissituation,2) the "desc" table (virtio memory descriptors, see virtio-ring)3) the "avail" table (available entries in the desc table)Parts 2 and 3 are repeated three times, to allow for a maximum of threevirtqueues per device. This is good enough for all current drivers.The guest side (x86 in my system) allocates some device-accessiblememory, and writes the PCI address to the device descriptor. 
This memorycontains:1) the "used" table (consumed entries in the desc/avail tables)This exists three times as well, once for each virtqueue.The rest is basically a copy of virtio-ring, with a few changes to allowfor cacheing, etc. It may not even be worth doing this from aperformance standpoint, I haven't benchmarked it yet.For now, I'd be happy with a non-DMA memcpy only solution. I can add DMAonce things are working.I've got the current code (subject to change at any time) available atthe address listed below. If you think another format would be betterfor you, please ask, and I'll provide it.'ve gotten plenty of email about this from lots of interesteddevelopers. There are people who would like this kind of system to justwork, while having to write just some glue for their device, just like anetwork driver. I hunch most people have created some proprietary messthat basically works, and left it at that.So, here is a desperate cry for help. I'd like to make this work, andI'd really like to see it in mainline. I'm trying to give back to thecommunity from which I've taken plenty.Ira | https://lkml.org/lkml/2009/9/21/335 | CC-MAIN-2017-47 | refinedweb | 1,196 | 61.67 |
sankarr26533925
31-12-2019
Form A has a few fields and a submit button. For form submission, we are using "Submit to REST endpoint" (a Sling AEM servlet) and adding the redirect URL as the Form B path.
We will get an ID value as part of the Sling AEM servlet response. This ID has to be shown in Form B.
I have gone through all the below links and could not find how to get the ID value in Form B. Please read my question carefully and provide your response. Thanks in advance.
How to return a value from submit action and use it in the redirect page?
AEM 6.0 Forms Help | Writing custom Submit action for adaptive forms-...
ksetti
07-04-2020
Hi Sankar,
If you haven't got the answer yet, you can try the below. Just have this code in your submit servlet.
import com.adobe.aemds.guide.servlet.GuideSubmitServlet;
Map<String, String> redirectParameters = GuideSubmitServlet.getRedirectParameters(slingRequest);
redirectParameters.put("responseId", responseId);
GuideSubmitServlet.setRedirectParameters(slingRequest, redirectParameters);
You can capture this value using a JavaScript callback function in the submit button's rule and pass it to Form B:
guideBridge.submit({
    error : function (guideResultObject) { /* log message */ },
    success : function (guideResultObject) { alert(guideResultObject.data.responseId); }
});
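On the Form B side, redirect parameters set this way typically reach the redirect page as query-string parameters, so they can be read back with plain JavaScript. A small sketch: the sample query string is made up, the parsing itself is generic, and the commented last line only indicates where an adaptive-form rule would use the value (the field name is hypothetical).

```javascript
// Sketch: pull a parameter like "responseId" out of a query string.
// In the browser you would pass window.location.search instead of the
// sample string used here.
function getQueryParam(search, name) {
  var match = search.match(new RegExp('[?&]' + name + '=([^&]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

var responseId = getQueryParam('?responseId=REQ-12345', 'responseId');
// In Form B's rule editor, assign it to the text box, e.g.
// (field name "responseIdBox" is hypothetical):
//   responseIdBox.value = responseId;
console.log(responseId);
```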
09-01-2020
@Mayank_Gandhi
Thanks for your response.
1. In the servlet where you are handling the request/response, append the ID to the Form B path that you created and fetch it in the form.
==> I assume that the ID should be added as a query param to the Form B path. I want to show this ID in a text box (added in a form fragment in Form B).
==> In this scenario, where do we have to read this ID value and display it in the text box? Is it from the rule editor or some other place?
2. Don't use 2 forms; rather, have two fragments. Create a button to submit in fragment 1, keeping fragment 2 hidden; on callback, hide fragment 1 and set the field of fragment 2 (Form B in your case).
==> Form A is the initial dialog for creating more than 10 different forms. I can't add Form A and Form B as a single fragment in a single form because Form B (which contains more than 8 child panels) will be different for the 10 different forms.
==> For example
Mayank_Gandhi
Employee
05-01-2020
You can try two things:
1. In the servlet where you are handling the request/response, append the ID to the Form B path that you created and fetch it in the form.
Opened 2 years ago
Last modified 4 months ago
#20081 assigned New feature
Minimize the risk of SECRET_KEY leaks
Description
Paul:
We should consider generating it on the fly something like the way Horizon does:
The things we lose when the secret key changes (sessions and password reset links IIRC) are generally per-deployment specific anyway. There's still a lot of bikeshedding to be found behind that issue (should production servers have the ability to write this file), but it's not a bad approach for those same users who commit their secret keys to their public github repos.
Jacob:
Oh this is neat! So basically when the server starts up it writes out
a secret key file if it one doesn't already exist? I like it, seems
like a really good thing to do in core. Security-by-default and all
that.
How does this handle multiple web servers, though? Seems like if you
weren't careful you'd end up with differing secrets across your web
nodes, bad news. Couldn't we do something similar but store the key in
the database? That seems like it wouldn't be any more or less secure
than the filesystem, and would solve N=1 problems.
Paul:
Yeah, the multiple servers deployment its downside and the reason I originally objected to including it in Horizon. What it does is bump the user-facing issue (you have to deal with and understand a secret key) a bit further along the line. I'd be willing to guess that a majority of Django sites that leak the secret key fall into the N=1 category, or would do the correct thing WRT secret key if it weren't so easy to commit to the repo and was a specific action they had to take when N > 1. The downside of course is that there will always be a learning curve.
I'm -1 on storing the key in the database - databases tend to be easier to compromise than file systems, and people tend to be lazy about the security of their DB backups.
Change History (22)
comment:1 Changed 2 years ago by anonymous
comment:2 Changed 2 years ago by jdunck
Previous was me.
comment:3 Changed 2 years ago by Ryan Leckey <leckey.ryan@…>
My proposal would be the following:
- startproject does not set SECRET_KEY
- generate_secret_key (or some smaller name) be added to core (the command would just output the key so someone could pipe it to settings.py or some file referenced by settings.py)
- runserver uses some default non-secret key (its development anyway) if SECRET_KEY is not set
comment:4 Changed 2 years ago by lukeplant
My own practice is to put secret key in a secrets.json file, which is parsed by settings.py. The more I think about it, the more benefits I can see:
- The code to use it from settings.py is really easy:
import json
secrets = json.load(file("secrets.json"))
SECRET_KEY = secrets['SECRET_KEY']
- It is less likely to be checked into source control due to the different file extension
- If it is checked into VCS, then it should be relatively simple to rebuild the VCS history without that single file, allowing you to republish your fixed repo.
- You can put more things in there - any other passwords etc. that should not be checked into source control.
- Django management commands could correctly parse and re-write this file automatically, unlike something that was Python syntax.
- Other systems can parse this file to extract passwords (I often need this in deployment)
comment:5 Changed 2 years ago by unaizalakain
I do that too but, like mentioned by Jacob:
How does this handle multiple web servers, though? Seems like if you
weren't careful you'd end up with differing secrets across your web
nodes, bad news.
comment:6 Changed 2 years ago by unaizalakain
- Cc unai@… added
comment:7 Changed 19 months ago by wim@…
For me, a very easy solution would work:
from secrets import SECRET_KEY
It is easier to remember to keep secrets.py out of your repo than to keep settings.py out of your repo, and I would like the default project layout to use this approach.
On multiple servers, just copy secrets.py to the other servers.
comment:8 Changed 19 months ago by erikr
- Cc eromijn@… added
comment:9 Changed 18 months ago by erikr
- Keywords nlsprint14 added
I think the horizon method is almost perfect. For N=1 deployments, it simply fixes the problem without any special effort being required. Yes, importing from a secrets module or JSON file also adds security, but simply require more effort than the horizon method.
For N>1 deployments, we could fall back to a form of the current management, which seems to mean that most people roll something themselves. That means we add no security for N>1, but improving it for N=1 is already a great step forward. So, the default setting for secret key would be to autogenerate or read, but it can be overridden like any other setting. The patch for all of this is fairly simple, and I would be happy to contribute it.
However, I see an important issue for which I do not have an answer yet: if someone moves from N=1 to N>1 deployment, they will silently start using different secret keys. No warning is given. Depending on chance, users will encounter random failures with some features, and the cause will not be obvious. If we want to make this change, we should try to find some way so that developers are reminded to change their secret key management when they move from N=1 to N>1.
I can't think of any particular way to do this. Django is not aware of how many servers are running. And when trying to validate some data which involves the secret key, we can only check whether or not it is valid, not whether it might be valid with the secret key that was configured on another server. Secret key misalignment is not distinguishable from any other issue. Any suggestions?
comment:10 Changed 15 months ago by bendavis78
- Cc bendavis78 added
I'm also +1 for auto-generating the key à la horizon. A good point is made in comment:9 that one potential problem is when moving from a "single" to "multi" deployment (eg, if a new dev is unfamiliar with SECRET_KEY they might be caught by surprise).
I think django.conf.Settings() should generate SECRET_KEY based on a new setting called SECRET_KEY_FILE. If that setting is empty, Settings.__init__ will fail in the same way it currently does if SECRET_KEY is empty. The new setting for SECRET_KEY_FILE can be accompanied by documentation (using comments in settings.py) saying that if the file does not exist it will be auto-generated, along with the caveat that the key must remain the same in multi-server environments. When starting a new project via "startproject", SECRET_KEY_FILE should default to an empty string. So whether a dev is running startproject or upgrading, they will be forced to look at the setting and understand what it does.
We could further protect devs from slipping up by storing a known value in the database that is hashed using the secret key. This could be set during the auto-generate function. Then, during startup we could check the stored hash against the known value, and if they don't match, raise an exception explaining that if you *know* the secret key must be changed and you understand the implications, then you should manually re-hash that known value in the database.
Also, the function that generates the key should raise an exception if the file exists. In that case the file would have to be removed or changed manually.
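The sentinel check proposed in this comment could look roughly like the following. This is hypothetical code, not anything Django ships: the sentinel string is a placeholder, and in the real proposal the digest would live in the database rather than a local variable.

```python
import hashlib
import hmac

# Placeholder values -- in the proposal, the digest would be stored in the
# database and SECRET_KEY would come from settings.
SECRET_KEY = "example-secret-key"
KNOWN_VALUE = b"secret-key-sentinel"

def sentinel_digest(secret_key):
    # HMAC ties the stored digest to the current secret key.
    return hmac.new(secret_key.encode(), KNOWN_VALUE, hashlib.sha256).hexdigest()

stored = sentinel_digest(SECRET_KEY)  # written once, e.g. when the key is generated

# Startup check: passes while the key is unchanged, fails if it silently
# differs (e.g. a second server auto-generated its own key).
assert hmac.compare_digest(stored, sentinel_digest(SECRET_KEY))
assert stored != sentinel_digest("a-different-key")
```

The point of the HMAC rather than a bare hash is that the check fails precisely when the key changes, which is the failure mode the comment is trying to surface early.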
comment:11 Changed 15 months ago by bendavis78
Actually, after thinking it through a bit more, I might be -1 on auto-generating the key, but still +1 on having SECRET_KEY_FILE and possibly a hashed known value stored in the db to protect against change. A management command is more explicit, and forces the dev to be involved in the security process. I also think this process should be consistent whether or not DEBUG is True.
comment:12 Changed 15 months ago by aj7may
Maybe setting the default value in settings.py to something like the following would encourage some good practices:
SECRET_KEY = os.getenv('SECRET_KEY')
developers can then run the test server with the default secret key, while the real secret key remains a secret on the production app servers.
Inspired by the twelve-factor app methodology.
What do you think?
comment:13 Changed 15 months ago by aj7may
In addition, a generatesecret command, as mentioned earlier, could be implemented and used to generate those keys.
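A sketch of what such a command could output. It uses Python 3's secrets module (which postdates this ticket) and a 50-character key with an alphabet similar to the keys startproject emits; both choices are illustrative, not anything a patch here specifies.

```python
import secrets
import string

# Illustrative sketch of what a `generatesecret` command could emit.
ALPHABET = string.ascii_lowercase + string.digits + "!@#$%^&*(-_=+)"

def generate_secret_key(length=50):
    # secrets.choice draws from a CSPRNG, unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

key = generate_secret_key()
print(key)
```

Piping the output to a file referenced by settings (or to an environment variable) covers both the N=1 and N>1 cases, since the same generated value can be copied to every server.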
comment:14 Changed 15 months ago by aj7may
- Owner changed from nobody to aj7may
- Status changed from new to assigned
Here is a patch that accomplishes what I stated above. Looking for feedback.
comment:15 Changed 15 months ago by aj7may
- Has patch set
comment:16 Changed 15 months ago by PaulM
I left a comment on the PR, but to summarize here:
-1 on storing this in an environment variable.
It is already possible to follow that advice with Django, but I disagree strongly with the 12factor website about the security and practicality of that approach in the context of a typical Django deployment.
comment:17 Changed 15 months ago by anonymous
So there is a pull request which implements this. I like having a command to generate a secret key, and the idea of a secret key file. However, I think that automatically generating the secret key is going to lead to footgun-ey behavior. This thread mentions N=1 deployments; however, it doesn't account for migrations or the like. If we're creating this file for users, there is a good chance they won't know it even exists. This adds a file that they should be backing up/copying over during a server migration that they aren't even really aware exists. This is worse on a platform like Heroku, where if they leave this to generate the file by default, it's going to get a new value every time the app restarts due to the ephemeral FS. Last I looked, Heroku restarted the dynos at least once a day, and for an N=1 deployment there is a decent chance it's going to get idled out as well. Essentially, I think the automatic generation is being too clever and implicit.

I also am not a giant fan of importing from django inside of the settings file. It seems like it's a good way to introduce circular imports that will be tricky to debug.

I think it would be better for the PR to add a setting, SECRET_KEY_FILE, which will be used to securely load the secret key from a file (doing all of the proper things to ensure that the file has safe permissions). There should also be the command that the PR has to generate such a file, or pass - to pipe it to stdout.
I also am not a giant fan of importing from django inside of the settings file. It seems like it's a good way to introduce circular imports that will be tricky to debug.
I think it would b ebetter for the PR to add a setting, SECRET_KEY_FILE which will be used to securely load the secret key from a file (doing all of the proper things to ensure that the file has safe permissions). There should also be the command that the PR has to generate such a file, or pass - to pipe it to stdout.
comment:18 Changed 15 months ago by aj7may
Got lots of feedback. I'm working on incorporating some of these ideas. I'll leave a note here when the PR has been updated.
comment:19 Changed 15 months ago by aj7may
Pull request updated:
comment:20 Changed 13 months ago by slurms
- Needs tests set
comment:21 Changed 7 months ago by berkerpeksag
- Needs documentation set
- Patch needs improvement set
- Version changed from 1.5 to master
comment:22 Changed 4 months ago by timgraham
- Needs documentation unset
- Needs tests unset
For posterity, a non-master-HEAD reference to the same file: | https://code.djangoproject.com/ticket/20081 | CC-MAIN-2015-35 | refinedweb | 1,971 | 68.2 |
BaseFileLock class¶
(Shortest import:
from brian2.utils.filelock import BaseFileLock)
- class brian2.utils.filelock.BaseFileLock(lock_file, timeout=- 1)[source]¶
Implements the base class of a file lock.
Attributes
Methods
Details
- is_locked¶
True, if the object holds the file lock.
Changed in version 2.0.0: This was previously a method and is now a property.
- timeout¶

The default timeout value, used by acquire() when no explicit timeout is given.
- acquire(timeout=None, poll_intervall=0.05)[source]¶
Acquires the file lock or fails with a Timeout error.

If timeout is None, the default timeout is used.
- release(force=False)[source]¶
Releases the file lock.
Please note that the lock is only completely released if the lock counter is 0.

Also note that the lock file itself is not automatically deleted.
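To illustrate the acquire/release/is_locked protocol these docs describe, here is a toy stand-in. It is not brian2's implementation: a real BaseFileLock subclass does platform-specific locking, while this sketch merely emulates single-holder semantics with O_EXCL, and (unlike the documented behavior above) it does delete its lock file on release.

```python
import os
import tempfile

class ToyFileLock:
    """Toy illustration only -- not brian2's BaseFileLock."""

    def __init__(self, lock_file):
        self.lock_file = lock_file
        self._fd = None

    @property
    def is_locked(self):
        # Mirrors the documented property: True while we hold the lock.
        return self._fd is not None

    def acquire(self):
        # O_EXCL makes creation fail if another holder already made the file.
        self._fd = os.open(self.lock_file, os.O_CREAT | os.O_EXCL | os.O_RDWR)

    def release(self):
        if self._fd is not None:
            os.close(self._fd)
            os.remove(self.lock_file)
            self._fd = None

path = os.path.join(tempfile.mkdtemp(), "demo.lock")
lock = ToyFileLock(path)
lock.acquire()
print(lock.is_locked)   # True
lock.release()
print(lock.is_locked)   # False
```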
1. Overview
In this article, we'll look at the Groovy language features for pattern matching in Strings.
We'll see how Groovy's batteries-included approach provides us with a powerful and ergonomic syntax for our basic pattern matching needs.
2. Pattern Operator
The Groovy language introduces the so-called pattern operator ~. This operator can be considered a syntactic sugar shortcut to Java's java.util.regex.Pattern.compile(string) method.
Let's check it out in practice as a part of a Spock test:
def "pattern operator example"() {
    given: "a pattern"
    def p = ~'foo'

    expect:
    p instanceof Pattern

    and: "you can use slashy strings to avoid escaping of backslash"
    def digitPattern = ~/\d*/
    digitPattern.matcher('4711').matches()
}
This is also pretty convenient, but we'll see that this operator is merely the baseline for some other, even more useful operators.
3. Match Operator
Most of the time, and especially when writing tests, we're not really interested in creating Pattern objects, but instead, want to check if a String matches a certain regular expression (or Pattern). Groovy, therefore, also contains the match operator ==~.
It returns a boolean and performs a strict match against the specified regular expression. Basically, it's a syntactic shortcut over calling Pattern.matches(regex, string).
Again, we'll look into it in practice as part of a Spock test:
def "match operator example"() {
    expect:
    'foobar' ==~ /.*oba.*/

    and: "matching is strict"
    !('foobar' ==~ /foo/)
}
4. Find Operator
The last Groovy operator in the context of pattern matching is the find operator =~. In this case, the operator will directly create and return a java.util.regex.Matcher instance.
We can act upon this Matcher instance, of course, by accessing its known Java API methods. But in addition, we're also able to access matched groups using a multi-dimensional array.
And that's not all — the Matcher instance will automatically coerce to a boolean type by calling its find() method if used as a predicate. Quoting the official Groovy docs, this means “the =~ operator is consistent with the simple use of Perl’s =~ operator”.
Here, we see the operator in action:
def "find operator example"() {
    when: "using the find operator"
    def matcher = 'foo and bar, baz and buz' =~ /(\w+) and (\w+)/

    then: "will find groups"
    matcher.size() == 2

    and: "can access groups using array"
    matcher[0][0] == 'foo and bar'
    matcher[1][2] == 'buz'

    and: "you can use it as a predicate"
    'foobarbaz' =~ /bar/
}
5. Conclusion
We've seen how the Groovy language gives us access to the built-in Java features regarding regular expressions in a very convenient manner.
The official Groovy documentation also contains some concise examples regarding this topic. It's especially cool if you consider that the code examples in the docs are executed as part of the documentation build.
As always, code examples can be found on GitHub. | https://www.baeldung.com/groovy-pattern-matching | CC-MAIN-2021-17 | refinedweb | 475 | 53.61 |
SYSCONF(3) BSD Programmer's Manual SYSCONF(3)
NAME
     sysconf - get configurable system variables
SYNOPSIS
     #include <unistd.h>

     long
     sysconf(int name);
DESCRIPTION
     This interface is defined by IEEE Std 1003.1-1988 ("POSIX"). The
     sysconf() function provides a method for applications to determine
     the current value of a configurable system limit or option
     (variable). The available values for name include:

     _SC_ARG_MAX
             The maximum bytes of arguments to exec(3) (including the
             environment).
     _SC_CHILD_MAX
             The maximum number of simultaneous processes per user ID.
     _SC_CLK_TCK
             The number of clock ticks per second.
     _SC_JOB_CONTROL
             Returns 1 if job control is available on this system,
             otherwise -1.
     _SC_SAVED_IDS
             Returns 1 if saved set-group-ID and saved set-user-ID is
             available, otherwise -1.
     _SC_VERSION
             The version of ISO/IEC 9945 (POSIX 1003.1) with which the
             system attempts to comply.
     _SC_EXPR_NEST_MAX
             The maximum number of expressions that can be nested within
             parentheses by the expr(1) utility.
     _SC_LINE_MAX
             The maximum length in bytes of a text-processing utility's
             input line.
     _SC_RE_DUP_MAX
             The maximum number of repeated occurrences of a regular
             expression permitted when using interval notation.
     _SC_2_VERSION
             The version of POSIX 1003.2 with which the system attempts
             to comply.
     _SC_PAGESIZE
             The size of a system page in bytes.
     _SC_FSYNC
             Return 1 if the system supports the File Synchronisation
             Option, otherwise -1.
     _SC_XOPEN_SHM
             Return 1 if the system supports the Shared Memory Option,
             otherwise -1.
     _SC_SEM_NSEMS_MAX
             The maximum number of semaphores in the system, or -1 if
             the system does not support the Semaphores Option.
     _SC_SEM_VALUE_MAX
             The maximum value a semaphore may have, or -1 if the system
             does not support the Semaphores Option.
ERRORS
     The sysconf() function may fail and set errno for any of the errors
     specified for the library function sysctl(3). In addition, the
     following error may be reported:

     [EINVAL]      The value of the name argument is invalid.
SEE ALSO
     pathconf(2), sysctl(3)
STANDARDS
     The sysconf() function conforms to IEEE Std 1003.1-1988 ("POSIX").
HISTORY
     The sysconf() function first appeared in 4.4BSD.
BUGS
     The value for _SC_STREAM_MAX is a minimum maximum, and required to
     be the same as ANSI C's FOPEN_MAX, so the returned value is a
     ridiculously small and misleading number.
Calling tuple on pytz.all_timezones produces an empty tuple
Bug Description
The following program crashes on pytz-2014.10:

import pytz

if __name__ == '__main__':
    assert tuple(pytz.all_timezones)
The following succeeds:

import pytz

if __name__ == '__main__':
    list(pytz.all_timezones)
    assert tuple(pytz.all_timezones)
I'm not sure what's going on here exactly. Something about the tuple invocation is bypassing your LazyList mechanism, but I can't figure out exactly what. The initial list invocation (bool also works) seems to force it to load, and subsequent calls then work.
The following demonstrates the source of the problem:
from __future__ import print_function

class TestList(list):
    def __iter__(self):
        yield 1
        yield 2
        yield 3

if __name__ == '__main__':
    x = TestList()
    x.extend([4, 5, 6])
    print(list(x))
    print(tuple(x))
Basically it looks like in CPython there's a special case for calling tuple on a list which bypasses all the normal methods.
Based on the response on the linked issue and the additional link provided to http://
Work around the Python issue, or flag this as WONTFIX if that isn't possible. We can't backout lazy loading of these data structures.
Can you stop making your LazyList type a subclass of list and just have it implement the general protocol? Because that would be sufficient to work around the Python issue.
Yes, that would be a suitable work around. It just needs to look like a list and do lazy loading, so people not using the structure don't have to pay the startup cost.
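A sketch of that suggestion: a lazily-loaded sequence that implements the iteration and sequence protocols without subclassing list, so CPython's C-level fast paths for exact lists never apply. The class name matches the discussion, but the code is illustrative, not pytz's.

```python
class LazyList:
    """Looks like a list, loads its data on first use (illustrative sketch)."""

    def __init__(self, loader):
        self._loader = loader      # callable returning the real data
        self._data = None

    def _fill(self):
        if self._data is None:
            self._data = list(self._loader())
        return self._data

    def __iter__(self):
        return iter(self._fill())

    def __len__(self):
        return len(self._fill())

    def __getitem__(self, index):
        return self._fill()[index]

tzs = LazyList(lambda: ["UTC", "Europe/London"])
print(tuple(tzs))   # ('UTC', 'Europe/London')
```

Because tuple() sees an ordinary object rather than a list subclass, it must go through __iter__, which triggers the lazy load before any data is copied.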
FWIW, this manifests a similar but slightly different problem in Jython: list(pytz.
Additional detail: Against all my expectations, this works correctly on pypy-2.5.0 and does not error. | https://bugs.launchpad.net/pytz/+bug/1435617 | CC-MAIN-2017-34 | refinedweb | 279 | 65.32 |
Introduction to Akka Streams
If you’re new to the world of Akka, I recommend reading the first part of this series, Introduction to Akka Actors, before continuing. The rest of this article assumes some familiarity with the content outlined in that post, as well as a high-level understanding of Akka.
Why Akka Streams?
If you’re new to the world of stream processing, I recommend reading this blog A Journey into Reactive Streams
First and foremost: why use Streams? What kind of advantage do they give us over the standard ways (e.g. callbacks) of handling data?

The answer is simple: they abstract away from the imperative nature of how data is fed into the application, giving us a declarative way of describing and handling it while hiding details we don't care about. Streaming helps you ingest, process, analyze, and store data in a quick and responsive manner.
Actors can be seen as dealing with streams as well: they send and receive series of messages in order to transfer knowledge (or data) from one place to another. It is tedious and error-prone to implement all the proper measures in order to achieve stable streaming between actors, since in addition to sending and receiving we also need to take care to not overflow any buffers or mailboxes in the process. Another problem is that Actor messages can be lost and must be retransmitted in that case. Failure to do so would lead to holes at the receiving side. When dealing with streams of elements of a fixed given type, Actors also do not currently offer good static guarantees that no wiring errors are made: type-safety could be improved in this case.
What is Akka Streams
Akka Streams is a module built on top of Akka Actors to make the ingestion and processing of streams easy. It provides easy-to-use APIs to create streams that leverage the power of the Akka toolkit without explicitly defining actor behaviors and messages. This allows you to focus on logic and forget about all of the boilerplate code required to manage the actor. Akka Streams follows the Reactive Streams manifesto, which defines a standard for asynchronous stream processing. Akka Streams provides a higher-level abstraction over Akka's existing actor model. The Actor model provides an excellent primitive for writing concurrent, scalable software, but it still is a primitive; it's not hard to find a few critiques of the model.
Akka Streams is non-blocking, which means that an operation does not hinder the progress of the calling thread, even if it takes a long time to finish.
Stream Terminology
- Source: This is the entry point to your stream. There must be at least one in every stream. Source takes two type parameters. The first one represents the type of data it emits and the second one is the type of the auxiliary value it can produce when run/materialized. If we don't produce any, we use the NotUsed type provided by Akka. There are various ways of creating a Source:
val source = Source(1 to 10)
source: akka.stream.scaladsl.Source[Int,akka.NotUsed] = ...
val s = Source.single("single element")
s: akka.stream.scaladsl.Source[String,akka.NotUsed] = ...
- Sink: This is the exit point of your stream. There must be at least one in every stream. The Sink is the last element of our Stream. Basically, it's a subscriber of the data sent/processed by a Source. Usually it outputs its input to some system IO. It is the endpoint of a stream and therefore consumes data. A Sink has a single input channel and no output channel. Sinks are especially needed when we want to specify the behavior of the data collector in a reusable way and without evaluating the stream:
val sink = Sink.fold[Int, Int](0)(_ + _) //Creating a sink
sink:akka.stream.scaladsl.Sink[Int,scala.concurrent.Future[akka.Done]] =
- Flow: The flow is a processing step within the stream. It combines one incoming channel and one outgoing channel as well as some transformation of the messages passing through it.If a
Flowis connected to a
Sourcea new
Sourceis the result. Likewise, a
Flowconnected to a
Sinkcreates a new
Sink. And a
Flowconnected with both a
Sourceand a
Sinkresults in a
RunnableFlow. Therefore, they sit between the input and the output channel but by themselves do not correspond to one of the flavors as long as they are not connected to either a
Sourceor a
Sink.
val source = Source(1 to 3)
val sink = Sink.foreach[Int](println)
val doubler = Flow[Int].map(elem => elem * 2)
doubler: akka.stream.scaladsl.Flow[Int(Source Output),Int(Sink Input),akka.NotUsed(Result of Flow Operation)] =
val runnable = source via doubler to sink
Via the
via method we can connect a
Source with a
Flow. We need to specify the input type because the compiler can't infer it for us. As we can already see in this simple example, the flow
double is completely independent from any data producers and consumers. They only transform the data and forward it to the output channel. This means that we can reuse a flow among multiple streams.
- ActorMaterializer: To run a stream this is required. It is responsible for creating the underlying actors with the specific functionality you define in your stream. Since ActorMaterializer creates actors, it also needs an ActorSystem. It basically allocates all the necessary resources to run a stream. It is important to remember that even after constructing the stream by connecting all the source, sink and different operators, no data will flow through it until it is materialized
Source can be considered as publisher and Sink as subscriber.
Basics and working with Flows
Back-pressure: A possible problematic scenario is when the
Source produces values too fast for the
Sink to handle and can possibly overwhelm it. As it gets more data that it cannot process at the moment it constantly buffers it for processing in the future. In the context of Akka Streams back-pressure is always understood as non-blocking and asynchronous.
When we talk about asynchronous, non-blocking backpressure we mean that the operators available in Akka Streams will not use blocking calls but asynchronous message passing to exchange messages between each other, and they will use asynchronous means to slow down a fast producer, without blocking its thread.
Graph:A description of a stream processing topology, defining the pathways through which elements shall flow when the stream is running.
RunnableGraph:Graph type, indicating that it is ready to be executed.
It is important to remember that even after constructing the
RunnableGraph by connecting all the source, sink and different operators, no data will flow through it until it is materialized.
Basic Code
import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl._
import scala.concurrent.Future
object StreamExample {
def main(args: Array[String]): Unit = {
implicit val system = ActorSystem("Sys")
implicit val materializer = ActorMaterializer()
val numbers = 1 to 100
//We create a Source that will iterate over the number sequence
val numberSource: Source[Int, NotUsed] = Source.fromIterator(() => numbers.iterator)
//Only let pass even numbers through the Flow
val isEvenFlow: Flow[Int, Int, NotUsed] = Flow[Int].filter((num) => num % 2 == 0)
//Create a Source of even random numbers by combining the random number Source with the even number filter Flow
val evenNumbersSource: Source[Int, NotUsed] = numberSource.via(isEvenFlow)
//A Sink that will write its input onto the console
val consoleSink: Sink[Int, Future[Done]] = Sink.foreach[Int](println)
//Connect the Source with the Sink and run it using the materializer
evenNumbersSource.runWith(consoleSink) }
- Complete
Streamby connecting
evenNumberswith
consoleSinkand running it by using
runWith.
Conclusion
Streaming is the ultimate game changer for data-intensive systems.It is best suited for big data-based applications.Main goal of Akka Stream is to build concurrent and memory bounded computations. Akka Streams also handles much of the complexity of timeouts, failure, backpressure, and so forth, freeing us to think about the bigger picture of how events flow through our systems.Here we learned more about Source,Sink and Flow and how can we run the stream using materializer.For more details you can follow the documentation:
In the next blog we will see more about graph and fan in and fan out functions. | https://medium.com/@arcagarwal/introduction-to-akka-streams-5155bd070e37 | CC-MAIN-2019-13 | refinedweb | 1,393 | 55.44 |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
Table of Contents:
Boost.Nowide is a library originally implemented by Artyom Beilis that makes cross platform Unicode aware programming easier.
The library provides an implementation of standard C and C++ library functions, such that their inputs are UTF-8 aware on Windows without requiring to use the Wide API. On Non-Windows/POSIX platforms the StdLib equivalents are aliased instead, so no conversion is performed there as UTF-8 is commonly used already.
Hence you can use the Boost.Nowide functions with the same name as their std counterparts with narrow strings on all platforms and just have it work.
Consider a simple application that splits a big file into chunks, such that they can be sent by e-mail. It requires doing a few very simple tasks:
int main(int argc,char **argv)
std::fstream::open(const char*,std::ios::openmode m)
std::remove(const char* file)
std::cout << file_name
Unfortunately it is impossible to implement this simple task in plain C++ if the file names contain non-ASCII characters.
The simple program that uses the API would work on the systems that use UTF-8 internally – the vast majority of Unix-Line operating systems: Linux, Mac OS X, Solaris, BSD. But it would fail on files like
War and Peace - Война и мир - מלחמה ושלום.zip under Microsoft Windows because the native Windows Unicode aware API is Wide-API – UTF-16.
This incredibly trivial task is very hard to implement in a cross platform manner.
Boost.Nowide provides a set of standard library functions that are UTF-8 aware on Windows and make Unicode aware programming easier.
The library provides:
argc,
argcand
envparameters of
mainuse UTF-8
cstdiofunctions:
fopen
freopen
remove
rename
cstdlibfunctions:
system
getenv
setenv
unsetenv
putenv
fstream
filebuf
fstream/ofstream/ifstream
iostream
cout
cerr
clog
cin
All these functions are available in Boost.Nowide in headers of the same name. So instead of including
cstdio and using
std::fopen you simply include
boost/nowide/cstdio.hpp and use
boost::nowide::fopen. The functions accept the same arguments as their
std counterparts, in fact on non-Windows builds they are just aliases for those. But on Windows Boost.Nowide does its magic: The narrow string arguments are interpreted as UTF-8, converted to wide strings (UTF-16) and passed to the wide API which handles special chars correctly.
If there are non-UTF-8 characters in the passed string, the conversion will replace them by a replacement character (default:
U+FFFD) similar to what the NT kernel does. This means invalid UTF-8 sequences will not roundtrip from narrow->wide->narrow resulting in e.g. failure to open a file if the filename is ilformed.
Why not provide both Wide and Narrow implementations so the developer can choose to use Wide characters on Unix-like platforms?
Several reasons:
wchar_tis not really portable, it can be 2 bytes, 4 bytes or even 1 byte making Unicode aware programming harder
fopen(const wchar_t*, const wchar_t*)in the standard library, so it is better to stick to the standards rather than re-implement Wide API in "Microsoft Windows Style"
Since the May 2019 update Windows 10 does support UTF-8 for narrow strings via a manifest file. So setting "UTF-8" as the active code page would allow using the narrow API without any other changes with UTF-8 encoded strings. See the documentation for details.
Since April 2018 there is a (Beta) function available in Windows 10 to use UTF-8 code pages by default via a user setting.
Both methods do work but have a major drawback: They are not fully reliable for the app developer. The code page via manifest method falls back to a legacy code page when an older Windows version than 1903 is used. Hence it is only usable if the targetted system is Windows 10 after May 2019.
The second method relies on user interaction prior to starting the program. Obviously this is not reliable when expecting only UTF-8 in the code.
Hence under some circumstances (and hopefully always somewhen in the future) this library will not be required and even Windows I/O can be used with UTF-8 encoded text.
As a developer you are expected to use
boost::nowide functions instead of the functions available in the
std namespace.
For example, here is a Unicode unaware implementation of a line counter:
To make this program handle Unicode properly, we do the following changes:
This very simple and straightforward approach helps writing Unicode aware programs.
Watch the use of
boost::nowide::args,
boost::nowide::ifstream and
boost::nowide::cerr/cout. On Non-Windows it does nothing, but on Windows the following happens:
boost::nowide::argsuses the Windows API to retrieve UTF-16 arguments, converts them to UTF-8 and replaces the original
argv(and optionally
env) to point to those, internally stored UTF-8 strings.
boost::nowide::ifstreamconverts the passed filename (which is now valid UTF-8) to UTF-16 and calls the Windows Wide API to open the file stream which can then be used as usual.
boost::nowide::cerrand
boost::nowide::coutuse an underlying stream buffer that converts the UTF-8 string to UTF-16 and use another Wide API function to write it to console.
Of course, this simple set of functions does not cover all needs. If you need to access Wide API from a Windows application that uses UTF-8 internally you can use the functions
boost::nowide::widen and
boost::nowide::narrow.
For example:
The conversion is done at the last stage, and you continue using UTF-8 strings everywhere else. You only switch to the Wide API at glue points.
boost::nowide::widen returns
std::string. Sometimes it is useful to prevent allocation and use on-stack buffers instead. Boost.Nowide provides the
boost::nowide::basic_stackstring class for this purpose.
The example above could be rewritten as:
stackstringand
wstackstringusing 256-character buffers, and
short_stackstringand
wshort_stackstringusing 16-character buffers. If the string is longer, they fall back to heap memory allocation.
The library does not include the
windows.h in order to prevent namespace pollution with numerous defines and types. Instead, the library defines the prototypes of the Win32 API functions.
However, you may request to use the
windows.h header by defining
BOOST_USE_WINDOWS_H before including any of the Boost.Nowide headers
Boost.Filesystem supports selection of narrow encoding. Unfortunatelly the default narrow encoding on Windows isn't UTF-8. But you can enable UTF-8 as default encoding on Boost.Filesystem by calling
boost::nowide::nowide_filesystem() in the beginning of your program which imbues a locale with a UTF-8 conversion facet to convert between
char
wchar_t. This interprets all narrow strings passed to and from
boost::filesystem::path as UTF-8 when converting them to wide strings (as required for internal storage). On POSIX this has usually no effect, as no conversion is done due to narrow strings being used as the storage format.
For Microsoft Windows, the library provides UTF-8 aware variants of some
std:: functions in the
boost::nowide namespace. For example,
std::fopen becomes
boost::nowide::fopen.
Under POSIX platforms, the functions in boost::nowide are aliases of their standard library counterparts:
There is also a
std::filebuf compatible implementation provided for Windows which supports UTF-8 filepaths for
open and behaves otherwise identical (API-wise).
On all systems the
std::fstream class and friends are provided as custom implementations supporting
std::string and
*::filesystem::path as well as
wchar_t* (Windows only) overloads for the constructor and
open. This is done so users can use e.g.
boost::filesystem::path with
boost::nowide::fstream without depending on C++17 support. Furthermore any path-like class is supported if it matches the interface of
std::filesystem::path "enough".
Note that there is no universal support for
path and
std::string in
boost::nowide::filebuf. This is due to using the std variant on non-Windows systems which might be faster in some cases. As
filebuf is rarely used by user code but rather indirectly through
fstream not having string or path support seems a small price to pay especially as C++11 adds
std::string support, C++17
path support and usage via
string_or_path.c_str() is still possible and portable.
Console I/O is implemented as a wrapper around ReadConsoleW/WriteConsoleW when the stream goes to the "real" console. When the stream was piped/redirected the standard
cin/cout is used instead.
This approach eliminates a need of manual code page handling. If TrueType fonts are used the Unicode aware input and output works as intended.
Q: What happens to invalid UTF passed through Boost.Nowide? For example Windows using UCS-2 instead of UTF-16.
A: The policy of Boost.Nowide is to always yield valid UTF encoded strings. So invalid UTF characters are replaced by the replacement character
U+FFFD.
This happens in both directions:
When passing a (presumptly) UTF-8 encoded string to Boost.Nowide it will convert it to UTF-16 and replace every invalid character before passing it to the OS.
On retrieval of a value from the OS (e.g.
boost::nowide::getenv or command line arguments through
boost::nowide::args) the value is assumed to be UTF-16 and converted to UTF-8 replacing any invalid character.
This means that if one somehow manages to create an invalid UTF-16 filename in Windows it will be impossible to handle it with Boost.Nowide. But as Microsoft switched from UCS-2 (aka strings with arbitrary 2 Byte values) to UTF-16 in Windows 2000 it won't be a problem in most environments.
Q: What kind of error reporting is used?
A: There are in fact 3:
Q: Why doesn't the library convert the string to/from the locale's encoding (instead of UTF-8) on POSIX systems?
A: It is inherently incorrect to convert strings to/from locale encodings on POSIX platforms.
You can create a file named "\xFF\xFF.txt" (invalid UTF-8), remove it, pass its name as a parameter to a program and it would work whether the current locale is UTF-8 or not. Also, changing the locale from let's say
en_US.UTF-8 to
en_US.ISO-8859-1 would not magically change all files in the OS or the strings a user may pass to the program (which is different on Windows)
POSIX OSs treat strings as
NULL terminated cookies.
So altering their content according to the locale would actually lead to incorrect behavior.
For example, this is a naive implementation of a standard program "rm"
It would work with ANY locale and changing the strings would lead to incorrect behavior.
The meaning of a locale under POSIX and Windows platforms is different and has very different effects.
It is possible to use Nowide library without having the huge Boost project as a dependency. There is a standalone version that has all the functionality in the
nowide namespace instead of
boost::nowide. The example above would look like
The upstream sources can be found at GitHub:
You can download the latest sources there: | https://www.boost.org/doc/libs/1_73_0/libs/nowide/doc/html/index.html | CC-MAIN-2020-29 | refinedweb | 1,882 | 54.52 |
List Comprehension vs Loop.
Diving deep in the world of Big Data we always looking for better tools to explore , operate and modify data. Any tool we would like to use will come with some advantages and disadvantages during the programming process.It is very important to understand when and how it is better to use python tools.
Lists are one one of the main built-in data structures in Python that can contain values of various data types.One of the most common operations on lists is “for loop” that can be easily replaced with list comprehension. Lots of developers call using list comprehension “Pythonic way”.
Using “for loop” For Filtering List
Let’s demonstrate a simple loop operation on a random list of numbers. Having a list of integers, lets exclude the odd numbers. For this task we have to create new list which we will fill with odd numbers while looping through our original list:
# creating list of 50 milllion integersfifty_mln_list = list(range(50_000_000))def exclude_odd(fifty_mln_list):
new_list=[]
for number in fifty_mln_list:
if number % 2 == 0:
new_list.append(number)
return new_list
Using built-in magic command “%%time” we can easily check how long it takes to execute the exclude_odd function on a list of ten million generated integer numbers:
%%timeit
exclude_odd(fifty_mln_list)>3.14 s ± 24.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
As we observe — it takes 5.96 seconds to complete one loop. Let’s move forward and test list comprehension on the same data.
Using “list comprehension” For Filtering List
def exclude_odd_comprehen(fifty_mln_list):
return [number for number in fifty_mln_list if number%2 ==0]%%timeit
exclude_odd_comprehen(fifty_mln_list)>2.42 s ± 29.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
“For loop” is almost 50% slower than a list comprehension (3.4/2.42≈1.29). And we just reduced five lines of code to one line! Cleaner and faster code? Great!
Python has a built-in function that allows to process and transform all the items in an iterable without using an explicit for loop, a technique commonly known as mapping. map() is useful when you need to apply a transformation function to each item in an iterable and transform them into a new iterable.
But when we have certain conditions that need to be applied prior to executing the map function — we involve filtering of data.
Testing “map” and “filter” function
Let’s create and test function that will add to the power 2 each odd integer number from list of 50 million numbers.
def power_2_odd(hundred_mln_list):
result = map(lambda x: x**2, filter(lambda x: x%2 == 0, \
fifty_mln_list))
return list(result)%%timeit
power_2_odd(fifty_mln_list)>11.7 s ± 112 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
As we observe — it takes 11.7 seconds to complete one loop. Which is pretty long time if we are planning to operate with large amount of data.
Let test the same data with the same conditions via new function that use list comprehension.
def power_2_odd_compr(hundred_mln_list):
return [x**2 for x in hundred_mln_list if x%2 == 0]%%timeit
power_2_odd_compr(fifty_mln_list)>8.03 s ± 35 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Using “map” and “filter” is about 70% slower than a list comprehension (11.7/8.03≈1.45). Telling us that list comprehension is not just more readable but also faster when we dealing with a-lot of data.
Avoid More Then Two Expressions in List Comprehensions
List comprehensions also support multiple “if” conditions. Multiple conditions at the same loop level are an implicit and expression. For example, say we want to filter a list of numbers to only even values grater than 4. Then we can use two following ways with “if” or “and” statement:
a = [1,2,3,4,5,6,7,8,9,10]
b = [x for x in a if x > 4 if x%2 == 0]
c = [x for x in a if x > 4 and x%2 == 0]
print(c == b)> True
We can specify condition at each level of looping after the for expression. Let’s filter a matrix so the only cells remaining are those divisible by 3 in rows that sum higher then 10. The code with list comprehension will be short but very difficult to read.
matrix = [[1,2,3],[4,5,6],[7,8,9]]
filtered = [[x for x in row if x%3 ==0]
for row in matrix if sum(row) >= 10]
print(filtered)> [[6], [9]]
We can save the number of lines but it is very difficult to understand code. It is better to avoid using more than two expressions in list comprehension. This could be two conditions, two loop or one condition and one loop.
Conclusion
List comprehension are works much faster than regular for loop or map and filter functions. It also make code more simple and readable.
It became very popular tool in Python, especially when we need to operate on list and return another list. But there is no way to “break” out of a list comprehension or put any comments inside of it.
List comprehension are with more than two expressions are very difficult to read and should be avoided. | https://ivanzakharchuk.medium.com/list-comprehension-vs-loop-985cbfe740d?source=post_internal_links---------4---------------------------- | CC-MAIN-2021-49 | refinedweb | 883 | 63.59 |
Source Code:
What is SignalRASP.NET SignalR is a library to add real-time web functionality to applications. Real-time web functionality is the ability to have server-side code push content to the connected clients as it happens, in real-time. It essentially allows your server-side C# code to invoke client-side JavaScript functions and vice-versa.
SignalR takes advantage of several transports, automatically selecting the best available transport given the client's and server's capabilities. SignalR takes advantage of WebSockets, an HTML5 API that enables bi-directional communication between the browser and server. SignalR will use WebSockets under the covers when it's available, and gracefully fall back to other techniques and technologies when it is, etc.
AssumptionsIt is assumed that you have .NET Core 3.1 and VS Code installed on your computer. The following walkthrough works well on Linux, Mac and Windows 10.
The MissionWe will build an ASP.NET Code 3.1 application that allows multiple users, using multiple browsers, to share and scribble on the same canvas.
Getting StartedLet's get started. Go into a terminal windows at suitable workspace folder on your computer and create an ASP.NET Core 3.1 web app as follows:
mkdir SignalrSketchpad cd SignalrSketchpad dotnet new webapp --no-https
Run the following command if you do not already have LibMan installed on your computer:
dotnet tool install -g Microsoft.Web.LibraryManager.Cli
libman install @aspnet/signalr -p unpkg -d wwwroot/lib/signalr --files dist/browser/signalr.js --files dist/browser/signalr.min.js
- Use the unpkg provider.
- Copy files to the wwwroot/lib/signalr destination.
- Copy only the specified files.
In the SignalrSketchpad project folder, create a Hubs folder. In the Hubs folder, create a DrawDotHub.cs file with the following code:
public class DrawDotHub: Hub { public async Task UpdateCanvas(int x, int y) { await Clients.All.SendAsync("updateDot",x, y); } public async Task ClearCanvas() { await Clients.All.SendAsync("clearCanvas"); } } The DrawDotHub class inherits from the SignalR Hub class. The Hub class manages connections, groups, and messaging. The UpdateCanvas and ClearCanvas methods can be called by a connected JavaScript client to draw and clear drawings respectively on all clients.
Configure SignalRAppend this code to the ConfigureServices() method in Startup.cs:
services.AddSignalR();Add this code to the Configure() method in Startup.cs. Put it inside the app.UseEndpoints() block:
endpoints.MapHub<DrawDotHub>("/drawDotHub");
Add SignalR client codeReplace contents of Pages\Index.cshtml with the following code:
@page <style> /* Some CSS styling */ .rightside { float: left; margin-left: 10px; } #sketchpad { float: left; height: 300px; width: 600px; border: 2px solid #888; border-radius: 4px; position: relative; /* Necessary for correct mouse co-ords in Firefox */ } #clear_button, #save_button { float: left; font-size: 15px; padding: 10px; -webkit-appearance: none; background: #feee; border: 1px solid #888; margin-bottom: 5px; } </style <h1>SignalR Sketchpad</h1 <div id="sketchpadapp"> <div class="rightside"> <button id="clear_button" onclick="tellServerToClear()">Clear Canvas</button> <br /> <canvas id="sketchpad" width="600" height="300"></canvas> </div </div> <script src="~/lib/signalr/dist/browser/signalr.js"></script> <script src="~/js/draw.js"></script>:
"use strict"; var connection = new signalR.HubConnectionBuilder().withUrl("/drawDotHub").build(); connection.on("updateDot", function (x, y) { drawDot(x, y, 8); }); connection.on("clearCanvas", function () { ctx.clearRect(0, 0, canvas.width, canvas.height); }); connection.start().then(function () { // nothing here }).catch(function (err) { return console.error(err.toString()); }); function tellServerToClear() { connection.invoke("ClearCanvas").catch(function (err) { return console.error(err.toString()); }); } ////////////////////////////////////////////////////// // Variables for referencing the canvas and 2dcanvas context var canvas, ctx; // Variables to keep track of the mouse position and left-button status var mouseX, mouseY, mouseDown = 0; // Draws a dot at a specific position on the supplied canvas name // Parameters are: A canvas context, the x position, the y position, the size of the dot function drawDot(x, y, size) { // Let's use black by setting RGB values to 0, and 255 alpha (completely opaque) var r = 0; var g = 0; var b = 0; var a = 255; // Select a fill style ctx.fillStyle = "rgba(" + r + "," + g + "," + b + "," + (a / 255) + ")"; // Draw a filled circle ctx.beginPath(); ctx.arc(x, y, size, 0, Math.PI * 2, true); ctx.closePath(); ctx.fill(); } // Keep track of the mouse button being pressed and draw a dot at current location function sketchpad_mouseDown() { mouseDown = 1; drawDot(mouseX, mouseY, 8); connection.invoke("UpdateCanvas", mouseX, mouseY).catch(function (err) { return console.error(err.toString()); }); } // Keep track of the mouse button being released function sketchpad_mouseUp() { mouseDown = 0; } // Keep track of the mouse position and draw a dot if mouse button is currently pressed function sketchpad_mouseMove(e) { // Update the mouse co-ordinates when moved getMousePos(e); // Draw a dot if the mouse button is currently being pressed if (mouseDown == 1) { drawDot(mouseX, mouseY, 8); connection.invoke("UpdateCanvas", mouseX, 
mouseY).catch(function (err) { return console.error(err.toString()); }); } } // Get the current mouse position relative to the top-left of the canvas function getMousePos(e) { if (!e) var e = event; if (e.offsetX) { mouseX = e.offsetX; mouseY = e.offsetY; } else if (e.layerX) { mouseX = e.layerX; mouseY = e.layerY; } } // Set-up the canvas and add our event handlers after the page has loaded // Get the specific canvas element from the HTML document canvas = document.getElementById('sketchpad'); // If the browser supports the canvas tag, get the 2d drawing context for this canvas if (canvas.getContext) ctx = canvas.getContext('2d'); // Check that we have a valid context to draw on/with before adding event handlers if (ctx) { //); } else { document.write("Browser not supported!!"); } The preceding code: - gets a 2d handle to the canvas element in the page - sets event listeners for mousedown, mousemove and mouseup events - a connection is established with the server-side hub at endpoint /drawDotHub - whenever the server invokes a function named updateDot() on the client, then the drawDot() JavaScript function is called - whenever the server invokes a function named clearCanvas() on the client, then the clearCanvas() JavaScript statement is executed - connection.start() establishes a live connection with the server - JavaScript function tellServerToClear() invokes method ClearCanvas() on the server - JavaScript function sketchpad_mouseDown() and sketchpad_mouseMove() invoke the UpdateCanvas() methods on the server. Run the app by typing “dotnet run” in a terminal window inside the project root folder. Point your browser to.
Use the concept to build much more sophisticated SignalR apps. | https://blog.medhat.ca/2020/02/build-simple-sketchpad-app-with-signalr.html | CC-MAIN-2021-43 | refinedweb | 1,040 | 50.73 |
On Mon, 2012-01-16 at 13:25 -0800, Andy Lutomirski wrote:> The MS_NOSUID semantics are somewhat ridiculous for selinux, I don't see how they're ridiculous.> and I'd> rather not make them match for no_new_privs. Note your patch for selinux does exactly the same thing in the NOSUIDcase and your NO_NEW_PRIVS flag. Right?- if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID)+ if ((bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) ||+ (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS)) new_tsec->sid = old_tsec->sid;> AppArmor completely> ignores MS_NOSUID,Ugh...well, I guess if it doesn't store any security data associatedwith files, only with file names, then there's nothing for it to do.Like I said before though, I think SELinux is the only sane LSM.> CLONE_NEWNET seems more likely to consume significant kernel resources> than the others. This actually brings up something we need to think about - if we'reheading towards being able to do bind mounts as non-root (which isnecessary for me) we'd need limits on e.g. the number of mounts that canbe made for a given uid/cgroup.I have a picked-from-thin-air hardcoded limit of 50 in my setuid binary,but I just realized that that's 50*RLIMIT_NPROC which is kind oflarge...> I didn't have a great reason, though. Unsharing the> filesystem namespace is possibly dangerous because it could prevent an> unmount in the original namespace from taking effect everywhere.Hmmm...hadn't considered that either. So the issue here is if a serveradmin has e.g. a NFS mount and my build tool makes a new copy of themount namespace, a process may still have it busy when she goes tounmount it?> Fair enough. I may add this in v3. seccomp is an even better> solution, though :)Yeah, definitely more flexible, though realistic use of seccomp dependson someone making a nice userspace tool to compile sets of syscalls like"no networking". | https://lkml.org/lkml/2012/1/16/421 | CC-MAIN-2014-15 | refinedweb | 317 | 64.3 |
As with any API that is under active development, sometimes as an Android developer you'll come across a really neat feature which, it turns out, doesn't exist in earlier versions of the API and isn't backwards-compatible. There are a couple of ways to deal with this. One is simply to insist that users must have a given version, and to limit your app to that version. However, that can also limit the reach of your app – there are still plenty of people out there using older versions of Android. Another option, for certain features, is to make use of the Android Support Library.
The Android Support Library provides support in earlier API levels for a certain set of features introduced in later API levels. So even if ShinyFeature is only available for API 11, if it's part of the Support Library, you may be able to use it and still have your app run on older devices. (What exactly will happen on the older devices will vary depending on how the feature is supported; it's important to test it before release on as many devices and API levels as you can.) The Support Library doesn't cover all newer features; some 3.x features just can't be run on 2.x devices at all. But for those that it works for, it's a neat way of extending the reach of your app without giving up on awesome newer features.
In general, it acts as a gateway: when you import and use one of the
android.support.* packages, the support class checks at runtime which API level the device is actually running. It will then use the platform's built-in code if possible (if the device's API level is high enough that the feature exists in core code), or the Support Library's own implementation if necessary. (There are a couple of exceptions to this; for example, the AsyncTaskLoader support class. If it matters to you, check the support class docs, and, again, test thoroughly.)
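Conceptually, this is the same runtime check you could otherwise write by hand. A minimal sketch of the pattern (the two method names here are illustrative placeholders, not real API):

```java
import android.os.Build;

public class FeatureDispatcher {
    // A support class wraps a check like this so your code doesn't have to:
    public static void doShinyThing() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
            // API 11+: delegate to the framework's built-in implementation
            useNativeFeature();
        } else {
            // Older device: fall back to a compatibility implementation
            useCompatFallback();
        }
    }

    private static void useNativeFeature() { /* hypothetical */ }

    private static void useCompatFallback() { /* hypothetical */ }
}
```

The support classes hide this branching behind one stable API, which is why you can call them unconditionally from code that runs on any supported API level.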
Of course, there is a cost to this. Because the library isn't part of core code, you have to bundle it with your app; automatically increasing the size of your app by a couple of megabytes. Clearly, if every app does this, users, especially on older devices, will run low on space. So do make sure that your app will otherwise work well on older devices, and that you wouldn't be better off writing a different version specifically targeting earlier levels of the API.
There are two levels of the Support Library: v4 and v13. v4 provides support for features introduced after API v4 (Android 1.6), and v13 provides support for features introduced after API level 13 (Android 3.2). Unsurprisingly, v4 is at present much more extensive than v13.
Using the Support Library in a project
First, you'll need to download the Support Library via the SDK Manager (accessed either via Eclipse or via the command line). Now open your Android project and create a
libsdirectory at the top level of the project directory. Find your SDK directory, and locate the JAR file for the support library you want, e.g.
[sdk-dir]/extras/android/support/v4/android-support-v4.jar
Copy this into the
libs directory of your project. Finally, edit your project's
AndroidManifest.xml to set the minimum and maximum SDK versions correctly:
<uses-sdk android:
The target should be the latest release (currently 17, at time of writing), and the minimum should be 4; or if you're using other features that don't exist in the Support Library or in API level 4, set the minimum to whatever is appropriate for your project.
When actually using the code, you must make certain that you're implementing the correct class; the one from the support library, not the one from the main API. Check your import statements carefully and make sure that you don't accidentally double up when you don't want to:
import android.support.v4.app.SearchViewCompat; // use this line... import android.widget.SearchView; // not this line!
Here's an example using the code from the last tutorial, replacing GestureDetector with GestureDetectorCompat:
import android.view.GestureDetector; // still need this for MyExampleGestureListener import android.support.v4.view.GestureDetectorCompat; public class GestureExampleView extends SurfaceView implements SurfaceHolder.Callback { // private class variables private GestureDetectorCompat detector; public GestureExampleView(Context context) { // see previous tutorial for rest of method detector = new GestureDetectorCompat(context, new MyExampleGestureListener()); } // rest of methods as before class MyExampleGestureListener extends GestureDetector.SimpleOnGestureListener { // This is as before; still using GestureDetector.SimpleOnGestureListener } }
Compile and run and it should work smoothly just as before; but now if you find an older device it should work on that too!
Notable classes in the Support Library
Currently, one of the most popular uses of the Support Library is to use Fragments with older versions of Android. Fragments allow you to create UI component modules which manage their own layout and lifecycle, which you can then swap in and out of your app, allowing you to create a multi-pane, dynamic UI. Fragments also make it easy to reuse components and to set them up differently in different contexts (eg phone handset vs tablet).
SearchViewCompat is also handy, allowing you to use the very useful SearchView introduced in API level 11.
ActionBar is not supported by the Support Library, but using
MenuCompat allows you to specify items to be used in the ActionBar when it is available.
Watch this space for later tutorials on Fragments and the ActionBar.
rahul Said:
i have one app of live wallpaper when i am running on some older version it shows an error first "unfortunnately stopped" but it display on live wallpaper gallrey and working fine but why its shows an error first thats my question plz help me.
Jorgesys Said:
you must have detailed information if read the message displayed in LogCat , "unfortunnately stopped" will have severak causes! | http://www.linux.com/learn/tutorials/719892-how-to-support-your-app-on-older-android-devices | CC-MAIN-2014-52 | refinedweb | 997 | 51.99 |
DEBSOURCES
Skip Quicknav
sources / nmap / 6.00-0.3+deb7u1 / xml23
/***************************************************************************
* xml.cc -- Simple library to emit XML. *
**********************: xml.cc 15135 2009-08-19 21:05:21Z david $ */
/*
This is a simple library for writing XML. It handles two main things:
keeping track of the element stack, and escaping text where necessary.
If you wanted to write this XML:
<?xml version="1.0"?>
<elem name="&10.5"></elem>
these are the functions you would call. Each one is followed by the text
it prints enclosed in ||.
xml_start_document() |<?xml version="1.0"?>|
xml_newline(); |\n|
xml_open_start_tag("elem"); |<elem|
xml_attribute("name", "&%.1f", 10.5); | name="&10.5"|
xml_close_start_tag(); |>|
xml_end_tag(); |</elem>|
The typical use is to call xml_open_start_tag, then call xml_attribute a
number of times. That is followed by xml_close_empty_tag, or else
xml_close_start_tag followed by xml_end_tag later on. You can call
xml_start_tag if there are no attributes. Whenever a start tag is opened
with xml_open_start_tag or xml_start_tag, the element name is pushed on
the tag stack. xml_end_tag pops the element stack and closes the element
it finds.
Here is a summary of all the elementary writing functions. The functions
return 0 on success and -1 on error. The terms "start" and "end" refer
to start and end tags and the start and end of comments. The terms
"open" and "close" refer only to start tags and processing instructions.
xml_start_comment() |<!--|
xml_end_comment() |-->|
xml_open_pi("elem") |<?elem|
xml_close_pi() |?>|
xml_open_start_tag("elem") |<elem|
xml_close_start_tag() |>|
xml_close_empty_tag() |/>|
xml_start_tag("elem") |<elem>|
xml_end_tag() |</elem>|
xml_attribute("name", "val") | name="val"|
xml_newline() |\n|
Additional functions are
xml_write_raw Raw unescaped output.
xml_write_escaped XML-escaped output.
xml_write_escaped_v XML-escaped output, with a va_list.
xml_start_document Writes <?xml version="1.0"?>.
xml_depth Returns the size of the element stack.
The library makes it harder but not impossible to make non-well-formed
XML. For example, you can call xml_start_tag, xml_end_tag,
xml_start_tag, xml_end_tag to create a document with two root elements.
Things like element names aren't checked to be sure they're legal. Text
given to these functions should be ASCII or UTF-8.
All writing is done with log_write(LOG_XML), so if LOG_XML hasn't been
opened, calling these functions has no effect.
*/
#include "nmap.h"
#include "output.h"
#include "xml.h"
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <list>
struct xml_writer {
/* Sanity checking: Don't open a new tag while still defining
attributes for another, like "<elem1<elem2". */
bool tag_open;
/* Has the root element been started yet? If so, and if
element_stack.size() == 0, then the document is finished. */
bool root_written;
std::list<const char *> element_stack;
};
static struct xml_writer xml;
/* Escape a string for inclusion in XML. This gets <>&, "' for attribute
values, -- for inside comments, and characters with value > 0x7F. It
also gets control characters with value < 0x20 to avoid parser
normalization of \r\n\t in attribute values. If this is not desired
in some cases, we'll have to add a parameter to control this. */
static char *escape(const char *str) {
/* result is the result buffer; n + 1 is the allocated size. Double the
allocation when space runs out. */
char *result = NULL;
size_t n = 0, len;
const char *p;
int i;
i = 0;
for (p = str; *p != '\0'; p++) {
const char *repl;
char buf[32];
if (*p == '<')
repl = "<";
else if (*p == '>')
repl = ">";
else if (*p == '&')
repl = "&";
else if (*p == '"')
repl = """;
else if (*p == '\'')
repl = "'";
else if (*p == '-' && p > str && *(p - 1) == '-') {
/* Escape -- for comments. */
repl = "-";
} else if (*p < 0x20 || (unsigned char) *p > 0x7F) {
/* Escape control characters and anything outside of ASCII. We have to
emit UTF-8 and an easy way to do that is to emit ASCII. */
Snprintf(buf, sizeof(buf), "&#x%x;", (unsigned char) *p);
repl = buf;
} else {
/* Unescaped character. */
buf[0] = *p;
buf[1] = '\0';
repl = buf;
}
len = strlen(repl);
/* Double the size of the result buffer if necessary. */
if (i == 0 || i + len > n) {
n = (i + len) * 2;
result = (char *) safe_realloc(result, n + 1);
}
memcpy(result + i, repl, len);
i += len;
}
/* Trim to length. (Also does initial allocation when str is empty.) */
result = (char *) safe_realloc(result, i + 1);
result[i] = '\0';
return result;
}
/* Write data directly to the XML file with no escaping. Make sure you
know what you're doing. */
int xml_write_raw(const char *fmt, ...) {
va_list va;
char *s;
va_start(va, fmt);
alloc_vsprintf(&s, fmt, va);
va_end(va);
if (s == NULL)
return -1;
log_write(LOG_XML, "%s", s);
free(s);
return 0;
}
/* Write data directly to the XML file after escaping it. */
int xml_write_escaped(const char *fmt, ...) {
va_list va;
int n;
va_start(va, fmt);
n = xml_write_escaped_v(fmt, va);
va_end(va);
return n;
}
/* Write data directly to the XML file after escaping it. This version takes a
va_list like vprintf. */
int xml_write_escaped_v(const char *fmt, va_list va) {
char *s, *esc_s;
alloc_vsprintf(&s, fmt, va);
if (s == NULL)
return -1;
esc_s = escape(s);
free(s);
if (esc_s == NULL)
return -1;
log_write(LOG_XML, "%s", esc_s);
free(esc_s);
return 0;
}
/* Write the XML declaration: <?xml version="1.0"?>. */
int xml_start_document() {
if (xml_open_pi("xml") < 0)
return -1;
if (xml_attribute("version", "1.0") < 0)
return -1;
if (xml_close_pi() < 0)
return -1;
if (xml_newline() < 0)
return -1;
return 0;
}
int xml_start_comment() {
log_write(LOG_XML, "<!--");
return 0;
}
int xml_end_comment() {
log_write(LOG_XML, "-->");
return 0;
}
int xml_open_pi(const char *name) {
assert(!xml.tag_open);
log_write(LOG_XML, "<?%s", name);
xml.tag_open = true;
return 0;
}
int xml_close_pi() {
assert(xml.tag_open);
log_write(LOG_XML, "?>");
xml.tag_open = false;
return 0;
}
/* Open a start tag, like "<name". The tag must be later closed with
xml_close_start_tag or xml_close_empty_tag. Usually the tag is closed
after writing some attributes. */
int xml_open_start_tag(const char *name) {
assert(!xml.tag_open);
log_write(LOG_XML, "<%s", name);
xml.element_stack.push_back(name);
xml.tag_open = true;
xml.root_written = true;
return 0;
}
int xml_close_start_tag() {
assert(xml.tag_open);
log_write(LOG_XML, ">");
xml.tag_open = false;
return 0;
}
/* Close an empty-element tag. It should have been opened with
xml_open_start_tag. */
int xml_close_empty_tag() {
assert(xml.tag_open);
assert(!xml.element_stack.empty());
xml.element_stack.pop_back();
log_write(LOG_XML, "/>");
xml.tag_open = false;
return 0;
}
int xml_start_tag(const char *name) {
if (xml_open_start_tag(name) < 0)
return -1;
if (xml_close_start_tag() < 0)
return -1;
return 0;
}
/* Write an end tag for the element at the top of the element stack. */
int xml_end_tag() {
const char *name;
assert(!xml.tag_open);
assert(!xml.element_stack.empty());
name = xml.element_stack.back();
xml.element_stack.pop_back();
log_write(LOG_XML, "</%s>", name);
return 0;
}
/* Write an attribute. The only place this makes sense is between
xml_open_start_tag and either xml_close_start_tag or
xml_close_empty_tag. */
int xml_attribute(const char *name, const char *fmt, ...) {
va_list va;
char *val, *esc_val;
assert(xml.tag_open);
va_start(va, fmt);
alloc_vsprintf(&val, fmt, va);
va_end(va);
if (val == NULL)
return -1;
esc_val = escape(val);
free(val);
if (esc_val == NULL)
return -1;
log_write(LOG_XML, " %s=\"%s\"", name, esc_val);
free(esc_val);
return 0;
}
int xml_newline() {
log_write(LOG_XML, "\n");
return 0;
}
/* Return the size of the element stack. */
int xml_depth() {
return xml.element_stack.size();
}
/* Return true iff a root element has been started. */
bool xml_tag_open() {
return xml.tag_open;
}
/* Return true iff a root element has been started. */
bool xml_root_written() {
return xml.root_written;
} | https://sources.debian.org/src/nmap/6.00-0.3+deb7u1/xml.cc/ | CC-MAIN-2021-04 | refinedweb | 1,148 | 59.9 |
If you are new to JavaScript or it has only been a minor part of your development effort until recently, you may be feeling frustrated. All languages have their quirks - but the paradigm shift from strongly typed server-side languages to JavaScript can feel especially confusing at times. I've been there! A few years ago, when I was thrust into full time JavaScript development, there were many things I wish I'd known going into it. In this article, I'll share a few of these quirks in hopes that I might spare you some of the headaches I endured. This isn't an exhaustive list – just a sampling – but hopefully it will shed some light on the language as well as show how powerful it can be once you get past these kinds of hurdles.
Coming from C#, I was well familiar with the `==` comparison operator. Value types (& strings) are either equal (have the same value) or they aren't. Reference types are either equal - as in pointing to the same reference - or NOT. (Let's just pretend you're not overloading the `==` operator, or implementing your own `Equals` and `GetHashCode` methods.) I was surprised to learn that JavaScript has two equality operators: `==` and `===`. Most of the code I'd initially seen used `==`, so I followed suit and wasn't aware of what JavaScript was doing for me as I ran code like this:
```javascript
var x = 1;

if(x == "1") {
    console.log("YAY! They're equal!");
}
```

So what dark magic is this? How can the integer 1 equal the string "1"?
In JavaScript, there's equality (`==`) and then there's strict equality (`===`). The equality comparison operator will coerce the operands to the same type and then perform a strict equality comparison. So in the above example, the string "1" is being converted, behind the scenes, to the integer 1, and then compared to our variable `x`.
Strict equality doesn't coerce the types for you. If the operands aren't of the same type (as in the integer 1 and string "1"), then they aren't equal:
```javascript
var x = 1;

// with strict equality, the types must be the *same* for it to be true
if(x === "1") {
    console.log("Sadly, I'll never write this to the console");
}

if(x === 1) {
    console.log("YES! Strict Equality FTW.");
}
```
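Coercion can produce results far stranger than the example above. These pairs (my illustrations, not from the original post) are worth memorizing:

```javascript
// == coerces before comparing; === never does
console.log(0 == "");           // true  - "" coerces to the number 0
console.log(null == undefined); // true  - special-cased by the spec
console.log(null == 0);         // false - null only loosely equals undefined
console.log(0 === "");          // false - different types, no coercion
```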
Depending on what other languages you hail from, you probably weren't too surprised to see these forms of accessing a property on an object and accessing an element in an array in JavaScript:
```javascript
// getting the "firstName" value from the person object:
var name = person.firstName;

// getting the 3rd element in an array:
var theOneWeWant = myArray[2]; // remember, 0-based index
```

However, did you know it's possible to use bracket notation to reference object members as well? For example:
```javascript
var name = person["firstName"];
```

Why would this be useful? While you'll probably use dot notation the majority of the time, there are a few instances where bracket notation makes certain approaches possible that couldn't be done otherwise. For example, I will often refactor large `switch` statements into a dispatch table, so that something like this:
```javascript
var doSomething = function(doWhat) {
    switch(doWhat) {
        case "doThisThing":
            // more code...
            break;
        case "doThatThing":
            // more code...
            break;
        case "doThisOtherThing":
            // more code....
            break;
        // additional cases here, etc.
        default:
            // default behavior
            break;
    }
}
```

Can be transformed into something like this:
```javascript
var thingsWeCanDo = {
    doThisThing      : function() { /* behavior */ },
    doThatThing      : function() { /* behavior */ },
    doThisOtherThing : function() { /* behavior */ },
    default          : function() { /* behavior */ }
};

var doSomething = function(doWhat) {
    var thingToDo = thingsWeCanDo.hasOwnProperty(doWhat) ? doWhat : "default";
    thingsWeCanDo[thingToDo]();
}
```

There's nothing inherently wrong with using a `switch` (and in many cases, if you're iterating and performance is a huge concern, a `switch` may outperform the dispatch table). However, dispatch tables offer a nice way to organize and extend behavior, and bracket notation makes it possible by allowing you to reference a property dynamically at runtime.
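A quick usage sketch of the dispatch-table idea (the names here are hypothetical, not from the original post) — the property name is computed at runtime, which is exactly what bracket notation buys us:

```javascript
var greetings = {
    formal  : function(name) { return "Good day, " + name + "."; },
    casual  : function(name) { return "Hey, " + name + "!"; },
    default : function(name) { return "Hello, " + name + "."; }
};

var greet = function(style, name) {
    var key = greetings.hasOwnProperty(style) ? style : "default";
    return greetings[key](name); // bracket notation with a runtime value
};

console.log(greet("casual", "Marty")); // Hey, Marty!
console.log(greet("pirate", "Doc"));   // Hello, Doc.
```

Adding a new behavior is just adding a property — no need to touch the lookup logic.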
There have been a number of great blog posts about properly understanding the "`this` context" in JavaScript (and I'll link to several at the bottom of this post), but it definitely makes my list of "things I wish I'd known" right away. It's really not difficult to look at code and confidently know what `this` is at any point - you just have to learn a couple of rules. Unfortunately, many explanations I read about it early on just added to my confusion. As a result, I've tried to simplify the explanation for developers new to JavaScript.
By default, until there's a reason for the execution context to change, `this` refers to the global object. In the browser, that would be the `window` object (or `global` in node.js).
`this` in Methods
When you have an object with a member that is a function, invoking that method from the parent object makes `this` the parent object. For example:

```javascript
var marty = {
    firstName: "Marty",
    lastName: "McFly",
    timeTravel: function(year) {
        console.log(this.firstName + " " + this.lastName + " is time traveling to " + year);
    }
};

marty.timeTravel(1955); // Marty McFly is time traveling to 1955
```
You might already know that you can take the `marty` object's `timeTravel` method and create a new reference to it from another object. This is actually quite a powerful feature of JavaScript - enabling us to apply behavior (functions) to more than one target instance:
```javascript
var doc = {
    firstName: "Emmett",
    lastName: "Brown",
};

doc.timeTravel = marty.timeTravel;
```

So - what happens if we invoke `doc.timeTravel(1885)`?
```javascript
doc.timeTravel(1885); // Emmett Brown is time traveling to 1885
```

Again - deep dark magic. Well, not really. Remember, when you're invoking a method, the `this` context is the parent object it's being invoked from. Hold on to your DeLoreans, though, 'cause it's about to get heavy.
What happens when we save a reference to the `marty.timeTravel` method and invoke our saved reference? Let's look:
```javascript
var getBackInTime = marty.timeTravel;
getBackInTime(2014); // undefined undefined is time traveling to 2014
```

Why "undefined undefined"?! Why not "Marty McFly"?
Let's ask an important question: What's the parent/owning object when we invoke our `getBackInTime` function? While the `getBackInTime` function will technically exist on the window, we're invoking it as a function, not a method of an object. When we invoke a function like this - with no owning object - the `this` context will be the global object. David Shariff has a great way of describing this:
> Whenever a function is called, we must look at the immediate left side of the brackets / parentheses "()". If on the left side of the parentheses we can see a reference, then the value of "this" passed to the function call is exactly of which that object belongs to, otherwise it is the global object.
Since `getBackInTime`'s `this` context is the `window` - which does not have `firstName` and `lastName` properties – that explains why we see "undefined undefined".
So we know that invoking a function directly – no owning object – results in the `this` context being the global object. But I also said that I knew our `getBackInTime` function would exist on the window. How do I know this? Well, unless I've wrapped the `getBackInTime` in a different scope (which we'll investigate when we discuss immediately-invoked-function-expressions), then any vars I declare get attached to the window. Here's proof from Chrome's console:
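A quick way to convince yourself (a sketch of my own, not from the original post — the method returns `this` so we can inspect it; in a browser the global object is `window`, in node it's `globalThis`):

```javascript
var marty = {
    firstName: "Marty",
    timeTravel: function(year) {
        return this; // return `this` so we can see what it was
    }
};

var getBackInTime = marty.timeTravel;

// invoked with no owning object: `this` falls back to the global object
// (logs true when run as a non-strict script; in strict mode `this` would be undefined)
console.log(getBackInTime(2014) === globalThis);
```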
This is the perfect time to talk about one of the main areas that `this` trips up developers: subscribing event handlers.
`this` in Methods When Invoked Asynchronously
So, let's pretend we want to invoke our `marty.timeTravel` method when someone clicks a button:
```javascript
var flux = document.getElementById("flux-capacitor");
flux.addEventListener("click", marty.timeTravel);
```

With the above code, when the user clicks the button, we'll see "undefined undefined is time traveling to [object MouseEvent]". WAT?! Well - the first, and most obvious, issue is that we're not providing the `year` argument to our `timeTravel` method. Instead, we subscribed the method directly as an event handler, and the `MouseEvent` argument is being passed to the event handler as the first arg. That's easy to fix, but the real issue is that we're seeing "undefined undefined" again. Don't despair - you already know why this is the case (even if you don't realize it yet). Let's make a change to our `timeTravel` function to log whatever `this` is to help give us some insight:
```javascript
marty.timeTravel = function(year) {
    console.log(this.firstName + " " + this.lastName + " is time traveling to " + year);
    console.log(this);
};
```

Now - when we click the button, we should see something similar to the following output in our browser console:
Our second `console.log` shows us the `this` context when the method was invoked – and it's the actual button element which we subscribed to. Does this surprise you? Just like earlier - when we assigned the `marty.timeTravel` reference to our `getBackInTime` var - a reference to `marty.timeTravel` was saved as our event handler and is being invoked, but not from the "owning" `marty` object. In this case, it's being invoked asynchronously by the underlying event emitting implementation of the button element instance.
So - is it possible to make `this` what we want it to be? Absolutely! In this case, the solution is deceptively simple. Instead of subscribing `marty.timeTravel` directly as the event handler, we can use an anonymous function as our event handler, and call `marty.timeTravel` from within it. This also provides the opportunity to fix our issue with the missing `year` parameter.
```javascript
flux.addEventListener("click", function(e) {
    marty.timeTravel(someYearValue);
});
```

Clicking the button now will result in something similar to this output on the console:
Success! But why did this work? Think of how we're invoking the `timeTravel` method. In our first button click example, we subscribed the method reference itself as the event handler, so it was not being called from the parent object of `marty`. In the second example, it's our anonymous function that will have a `this` of the button element, and when we invoke `marty.timeTravel`, we're calling it from the `marty` parent object, and the `this` will be `marty`.
`this` Inside Constructor Functions
When you use a constructor function to create a new instance of an object, the `this` value inside the function is the new object that's being created. For example:

```javascript
var TimeTraveler = function(fName, lName) {
    this.firstName = fName;
    this.lastName = lName;
    // Constructor functions return the
    // newly created object for us unless
    // we specifically return something else
};

var marty = new TimeTraveler("Marty", "McFly");
console.log(marty.firstName + " " + marty.lastName); // Marty McFly
```
You might already suspect, given the above examples, that there might be some language-level features that would allow us to invoke a function and tell it at runtime what `this` should be. You'd be right. The `call` and `apply` methods that exist on the `Function` prototype both allow us to invoke a function and pass in a `this` value.
`call`'s method signature takes the `this` arg, followed by the parameters the function being invoked would take as separate arguments:

```javascript
someFn.call(this, arg1, arg2, arg3);
```

`apply` takes `this` as the first arg, followed by an array of the remaining arguments:

```javascript
someFn.apply(this, [arg1, arg2, arg3]);
```
```javascript
doc.timeTravelFor = function(instance, year) {
    this.timeTravel.call(instance, year);
    // alternate syntax if you used apply would be
    // this.timeTravel.apply(instance, [year]);
};
```

Now it's possible to send Einstein on his way:
```javascript
var einstein = {
    firstName: "Einstein",
    lastName: "(the dog)"
};

doc.timeTravelFor(einstein, 1985); // Einstein (the dog) is time traveling to 1985
```

I know this is a contrived example, but it's enough to give you a glimpse of the power of applying functions to other objects.
There's still one other possibility we haven't explored. Let's give our `marty` instance a `goHome` method that's simply a shortcut to calling `this.timeTravel(1985)`:
```javascript
marty.goHome = function() {
    this.timeTravel(1985);
}
```

However, we know that if we subscribe `marty.goHome` as the event handler to our button's click event, that `this` will be the button - and unfortunately buttons don't have a `timeTravel` method. We could use our approach from earlier – where an anonymous function was the event handler, and it invoked the method on the `marty` instance – but we have another option, the `bind` function:
```javascript
flux.addEventListener("click", marty.goHome.bind(marty));
```

The `bind` function actually returns a new function, with the `this` value of the new function set to what you provide as the argument. If you're supporting older browsers (less than IE9, for example), then you'll need to shim the `bind` function (or, if you're using jQuery, you can use `$.proxy`; both underscore & lodash provide `_.bind`).
One important thing to remember is that if you use `bind` on a prototype method, it creates an instance-level method, which bypasses the advantages of having methods on the prototype. It's not wrong, just something to be aware of. I write more about this particular issue here.
There are two main ways you will typically see functions defined in JavaScript (though ES6 will introduce another): function declarations and function expressions.
Function Declarations do not require a `var` keyword. In fact, as Angus Croll says: "It's helpful to think of them as siblings of Variable Declarations." For example:

```javascript
function timeTravel(year) {
    console.log(this.firstName + " " + this.lastName + " is time traveling to " + year);
}
```

The function name `timeTravel` in the above example is visible not only inside the scope it was declared, but inside the function itself as well (this is especially useful for recursive function calls). Function declarations are, by nature, named functions. In other words, the above function's `name` property is `timeTravel`.
Function Expressions define a function and assign it to a variable. They typically look like this:

```javascript
var someFn = function() {
    console.log("I like to express myself...");
};
```

It's possible to name function expressions as well - though, unlike function declarations, a named function expression's name is only accessible inside its scope:

```javascript
var someFn = function iHazName() {
    console.log("I like to express myself...");
    if(needsMoreExpressing) {
        iHazName(); // the function's name can be used here
    }
};

// you can call someFn() here, but not iHazName()
someFn();
```
Discussing function expressions and declarations wouldn't be complete without at least mentioning "hoisting" - where function and variable declarations are moved to the top of the containing scope by the interpreter. We're not going to cover hoisting here, but be sure to read two great explanations by Ben Cherry and Angus Croll.
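To make the distinction concrete before you read those posts (my example, not from the original article): a declaration is hoisted along with its body, while only the `var` of an expression is hoisted — the function value isn't assigned until that line runs:

```javascript
hoistedDeclaration(); // works - the entire declaration was hoisted

function hoistedDeclaration() {
    console.log("declarations are hoisted in full");
}

try {
    hoistedExpression(); // only `var hoistedExpression` was hoisted - it's undefined here
} catch (e) {
    console.log(e.name); // TypeError
}

var hoistedExpression = function() {
    console.log("too late to matter");
};
```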
Based on what we've just discussed, you might have guessed that an "anonymous" function is a function without a name. Most JavaScript developers would quickly recognize the second argument below as an anonymous function:
```javascript
someElement.addEventListener("click", function(e) {
    // I'm anonymous!
});
```

However, it's also true that our `marty.timeTravel` method is anonymous as well:

```javascript
var marty = {
    firstName: "Marty",
    lastName: "McFly",
    timeTravel: function(year) {
        console.log(this.firstName + " " + this.lastName + " is time traveling to " + year);
    }
}
```

Since function declarations must have a name, only function expressions can be anonymous.
And since we're talking about function expressions, one thing I wish I'd known right away: the Immediately Invoked Function Expression (IIFE). There are a number of good posts on IIFEs (I'll list a few at the end), but in a nutshell, it's a function expression that is not assigned to a variable to be executed later; it's executed immediately. It might help to see this take shape in a browser console.
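Imagine typing a bare, nameless function into the console. A sketch of what happens (using `eval` here so the parse error can be caught and printed):

```javascript
try {
    eval("function() { var msg = 'hi'; console.log(msg); }");
} catch (e) {
    console.log(e.name); // SyntaxError - a function *declaration* requires a name
}
```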
That's not valid JavaScript syntax - it's a function declaration missing its name. However, to make it an expression, we just need to surround it with parentheses:
Making this an expression immediately returns our anonymous function to the console (remember, we're not assigning it, but since it's an expression we get its value back). So - we have the "function expression" part of "immediately invoked function expression". To get the "immediately invoked" aspect, we call what the expression returns by adding another set of parentheses after the expression (just like we'd invoke any other function):
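In code form (my sketch of the console session being described):

```javascript
// wrapping parentheses make it an expression - the console would echo the
// function value back, but nothing runs yet:
(function() {
    console.log("not yet...");
});

// a trailing pair of parentheses invokes the expression immediately:
(function() {
    console.log("immediately invoked!");
})();
```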
"But wait, Jim! I think I've seen this before with the invocation parentheses inside the expression parentheses". Indeed you probably have – it's perfectly legal syntax (and is well known to be Douglas Crockford's preferred syntax):
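That variant (again, a sketch of my own) simply moves the invoking parentheses inside the wrapping pair — the effect is the same:

```javascript
(function() {
    console.log("also immediately invoked!");
}());
```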
Either approach on where to put the invocation parentheses is usable; however, I highly recommend reading one of the best explanations ever on why & when it could matter.
Ok, great - now we know what an IIFE is - why is it useful?
It helps us control scope - an essential part of any JavaScript endeavor! The `marty` instance we've looked at earlier was created in the global scope. This means that the window (assuming we're in a browser) will have a `marty` property. If we write all of our JavaScript code this way, we'll quickly end up with a metric ton of vars declared in the global scope, polluting the window with our app's code. Even in the best case scenarios, it's bad practice to leak that many details into the global scope, but what happens when someone names a variable the same name as an already existing property on the window? It gets overwritten!
For example, if your favorite "Amelia Earhart" fan site declares a `navigator` variable in the global scope, this is a before and after look at what happens:
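Roughly, the before/after looks like this (a runnable sketch using `globalThis` and a made-up `pilot` property as a stand-in for the browser's `window.navigator`):

```javascript
globalThis.pilot = "the browser gives you this"; // stand-in for a built-in like window.navigator

var before = globalThis.pilot;

// a page script that re-declares the same name in global scope clobbers it:
globalThis.pilot = "Amelia Earhart";

console.log(before);           // "the browser gives you this"
console.log(globalThis.pilot); // "Amelia Earhart" - the original value is gone
```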
OOPS!
Obviously - polluting the global scope is bad. JavaScript utilizes function scope (not block scope, if you're coming from C# or Java, this is important!), so the way to keep our code from polluting the global scope is to create a new scope, and we can use an IIFE to do this since its contents will be inside its own function scope. In the example below, I'm showing you the `window.navigator` value in the console, and then I create an IIFE to wrap the behavior and data specific to Amelia Earhart. This IIFE happens to return an object that will serve as our "application namespace". Inside the IIFE I declare a `navigator` variable to show that it will not overwrite the `window.navigator` value.
As an added bonus, the IIFE we created above is the beginnings of a module pattern in JavaScript. I'll include some links at the end so you can explore module patterns further.
Eventually, you'll probably find yourself in a situation where you need to inspect the type of a value passed into a function, or something similar. The
typeof operator might seem the obvious choice, however, it's not terribly helpful. For example, what happens when we call
typeof on an object, an array, a string and a regular expression?
Well - at least we can tell our strings apart from objects, arrays and regular expressions, right? Thankfully, we can take a different approach to get more accurate information about type we're inspecting. We'll use the
Object.prototype.toString function and apply our knowledge of the
call method from earlier:
Why are we using the
toString on
Object.prototype? Because it's possible for 3rd party libraries, as well as your own code, to override the
toString method with an instance method. By going to the
Object.prototype, we can force the original
toString behavior on an instance.
If you know what
typeof will return and you don't need to check beyond what it will give you (for example, you just need to know if something is a string or not), then using
typeof is perfectly fine. However, if you need to tell arrays from objects, regexes from objects, etc., use
Object.prototype.toString.
I've benefitted tremendously from the insights of other JavaScript developers, so please check these links out and give these people a pat on the back for teaching the rest of us!
Jim Cowart is an architect, developer, open source author, and overall web/hybrid mobile development geek. He is an active speaker and writer, with a passion for elevating developer knowledge of patterns and helpful frameworks. | https://www.telerik.com/blogs/seven-javascript-quirks-i-wish-id-known-about | CC-MAIN-2021-39 | refinedweb | 3,607 | 54.02 |
Before reading this article, please go through the article's link, given below-
Reading this article, you can learn how to create column chart in Universal Windows apps development with XAML and Visual C#.
The following important tools are required for developing UWP-
Now, we can discuss step by step app development. Step 1- Open Visual Studio 2015 -> Start -> New Project-> Select Universal (under Visual C#->Windows)-> Blank app -> Give the suitable name for your app (ColumnUWP)->OK. Choose the Target and minimum platform version for your Windows Universal Application will support. Afterwards, the project creates App.xaml and MainPage.xaml. Step 2- Open (double click) the file MainPage.xaml in the Solution Explorer and add the WINRT XAML Tool Kit Control reference in the project.
For adding the reference, right click on your project (ColumnUWP) and select Manage NuGet Packages.
Choose Browse and search WinRTXamlToolkit.Controls.DataVisualization.UWP, select the package and install it.
Add TextBlock control, change the name and text property. Step 3 - Add the charting namespace and Line series control code in the XAML View. Step 4 - Add the name space, given below and class in MainPage.xaml.cs. Step 5- Add the Method GetChartData() in MainPage.xaml.cs. This code is used for setting the values for your ColumnChart and call the GetChartData() method in the constructor method (MainPage() method) in MainPage.xaml.cs. Step 6 - Deploy your app in Local Machine and the output of the ColumnUWP app is given below- Summary
Now, you have successfully created and tested the column chart in Visual C# - UWP environment.
View All | http://www.c-sharpcorner.com/article/creating-column-chart-in-universal-application-development-with-xaml-and-c-sharp2/ | CC-MAIN-2017-51 | refinedweb | 265 | 56.55 |
Having trouble getting this to work correctly. Not sure what I am doing wrong. I think I have it set up fairly correctly.
I also want to put a modulus in so that I only have to use even or odd to determine which player is currently rolling.
The design for this assignment is really quite simple. It asks player 1 to roll or to hold and they are only rolling 1 die. If they roll a 1 their turnScore goes to 0 and it is the other players turn. This is also a human so two people can play. No computer player in this version at all.
If you guy see any errors let me know. We just started learning methods and classes so something could be off. thanks!
Here is the code:
Code:/*Davin Kohnz * CS 140 Program 3 * Summer 2013 * 6/26/13 */ package program.pkg3; import javax.swing.JOptionPane; import java.util.Random; public class Program3 { public static final int DIE_SIDES=1; public static final int TARGET_SCORE=100; public static Random rand= new Random(); public static final int playerOneScore=0, playerTwoScore=0; public static void main(String[] args) { int die= rand.nextInt(DIE_SIDES)+1; int scoreForRound= 0; JOptionPane.showMessageDialog(null, "Welcome to Pig!"); while((playerOneScore < TARGET_SCORE) || playerTwoScore <TARGET_SCORE){ int reply=JOptionPane.showConfirmDialog(null, "Player 1:" + playerOneScore + "\n" +"Player 2:" + playerTwoScore +"\n" + "Player 1's turn\nClick \"Yes\" to roll, \"No\" to hold\n" + "Score for round="+ scoreForRound, "Pig!",JOptionPane.YES_NO_OPTION); if(reply==JOptionPane.YES_OPTION){ JOptionPane.showMessageDialog(null, "You Rolled a: "+ die); }else if(reply==JOptionPane.NO_OPTION){ JOptionPane.showMessageDialog(null, "You held with round score of: " + scoreForRound); }else if(die==1) { JOptionPane.showMessageDialog(null, "You Rolled a: "+ die); JOptionPane.showMessageDialog(null, "You pigged out!"); } } } | http://forums.devshed.com/java-help-9/help-pig-dice-game-please-947715.html | CC-MAIN-2016-50 | refinedweb | 287 | 60.72 |
The.
- for him a standard is something that is established, accepted and dominant
- following this definition only some of the Java EE APIs like the Servlet specification can be considered as a standard, because they are widely applied in the Java production technology landscape
- he looks in the past saying that there was little investment protection in some of the Java EE Standard APIs, like for example the EJB specification (massive API changes over the last decade)
- he also claims that JPA and JSF in their 1.0 Versions weren’t sufficient to fulfill the technical requirements in large scale enterprise development projects
- he looks at CDI as another young standard that needs to proove a long term stability before it can be considered as a standard IOC mechanism in Java enterprise applications
- consequently, his conclusion for the moment is: Sping and Hibernate are still “the de-facto” standards for Java enterprise development
The lessons of these phase make their way into the subsequent API versions and therefore – for a while – the API gets instable. Functionality is added, APIs are reduced adopted, simplified and so on. This turbulent and uncertain storming phase leads to increasing migration costs if customers want to follow the standard. Even if the 2.0 Version is downwoards compatible, using the nice new features instead of the many workarounds means refactoring efforts. If the API is not downwoards compatible API changes enforce migration effort, there is no choice. After a while however the API gets mature and these refactoring costs decrease, the API entered the phase of stabalization.
A mature API has constantly low costs of migration and refactoring because no fundamental changes are applied. After a while a technology is not used anymore, because it’s replaced by other innovative APIs. The technology is dead – there is no investment anymore in these APIs, the community stopped the project. Image 1 shows the idealized API life cycle.
This wrapper is used as API to build the business applications. The wrapper API could be the Spring Framework, or your own set of custom framework APIs. This way it’s easier and more efficient to make required changes when we moove to new application server releases or Java EE versions respectively. The wrapper absorbes Java EE API changes and saves us from the burden to change 50 applications. Instead, we do the changes once in a central place. Our development groups are not affected by application server upgrades. OE suppliers and the Java Communication Process (JCP) do not influence our decisions and efforts.
I am saying your own set because you may use a mixed technology stack (Image 2): some mature Java EE APIs (e.g. Servlets, JPA 2.0), some de-facto standards (e.g. Spring IOC) and some proprietary custom APIs that you have developed as wrappers around Newby Java EE APIs. The most important thing is that those APIs support downwoards compatability for your production applications. You have to find a set of APIs that saves you from large migration efforts when you want to move to new Java EE application server releases.
The answer is: because someone had the courage to use it and others (including Java EE) followed. A “standard” is what a large part of the community uses to run large applications in production. A standard is not necessarily a Java EE standard. In the past Java EE standards followed the de-facto standard frameworks (e.g. Hibernate, Spring). There is a high probability that any new technologies first reach a certain level of maturity in open source frameworks. Then they will become Java EE standards. That’s because the vast majority of Java technology innovation has its source in the community – at least for the last decade.
Reference: Java EE 6 vs. Spring Framework: A technology decision making process from our JCG partner Niklas.
Related Articles :
Conclusion:
Go with Spring :-)
Conclusion: Node.js
I completely disagree with the statement ”
CDI, for example, is not mature in its JSR 299/330 version.” CDI is put in production at a lot of customer shops and has evolved to be the premier specification of Java EE6. CDI has evolved from the best of Spring IOC, Guice and Seam. You claim to be burnt in the EJB era which was 8-10 years ago. EJB in its latest incarnation is nothing like it was in the EJB-2.x days.
The argument here is plain simple proprietary vendor lock-in with Spring vs Standards and backward compatibility with Java EE 6, everything else is FUD.
I think there are several good reasons to use spring:
a) Configuration is server independent
b) Is widely used
c) You may run it on a Tomcat Server
d) You choose which implementation version of the standard’s to use. May be JPA is an standard, but vendor specific features are sometimes very useful. Why let the server choose the implementation.
I am sorry if I am mistaken,what I get out of this article and more by comments out here is that”new api’s and specification” should not be entertained by JSR because there is already something production-ready available in the market.Some ‘team leaders’ and ‘project managers’ with monolithic mindset are making the life of developers hell,by seizing the growth of technologies.Its my sincere request to them,please learn something from your ‘microsoft’ and ‘php’ counterparts,embrace new api’s for heaven’s sake. Thanks to all.
The “Java EE API Lifecycle” applies just as much to Spring as any other technology. I doubt there is anyone here who is still using the old school, massive spring XML configs when they could be using annotation driven Spring with autowiring, or the new simplified spring XML and component namespaces.
Spring has evolved just as much as Java EE, and I’d argue that it as roughly the same re-factoring cost of a JEE app when switching between major versions (With the exception of EJB1/2… it just sucked.).
Thanks for the article – some great pearls of wisdom in there.
Polite feedback: Consider using a spell checker before publishing :-) | http://www.javacodegeeks.com/2012/01/java-ee-6-vs-spring-framework.html | CC-MAIN-2014-52 | refinedweb | 1,024 | 60.95 |
How to Control Your Servo with the Arduino
By using a potentiometer (or any analog sensor), it’s possible to directly control your servo with the Arduino in the same way that you’d control a mechanical claw at the arcades.
The Knob sketch
This example shows you how you can easily use a potentiometer to move your servo to a specific degree.
You need:
An Arduino Uno
A breadboard
A servo
A 10k ohm variable resistor
Jump wires
The servo is wired exactly as in the Sweep example, but this time you need extra connections to 5V and GND for the potentiometer, so you must use a breadboard to provide the extra pins. Connect the 5V and GND pins on the Arduino to the positive (+) and negative (-) rows on the breadboard.
Connect the servo to the breadboard using either a row of three header pins or three jump wires. Connect the red socket to the 5V row, the black/brown socket to the GND row, and the white/yellow socket to pin 9 on the Arduino.
Find a space on the breadboard for the potentiometer. Connect the center pin to pin A0 on the Arduino and the remaining pins to 5V on one side and GND on the other.
After you have built the circuit, open the sketch by choosing File→Examples→Servo→Knob. The code for the sketch is as follows:
// }
You may notice that there are a few discrepancies between the comments and the code. When referring to the range of degrees to move the servo, the sketch mentions both 0 to 179 and 0 to 180. With all Arduino tutorials, it’s best to assume that they’re works in progress and may not always be accurate.
The correct range is 0 to 179, which gives you 180 values. Counting from zero is referred to as zero indexing and is a common occurrence in Arduino, as you may have noticed by this point.
After you have found the sketch, press the Compile button to check the code. The compiler should highlight any syntax errors in the message box, which lights up red when they are discovered.
If the sketch compiles correctly, click Upload to upload the sketch to your board. When it is done uploading, your servo should turn as you turn your potentiometer.
If that isn’t what happens, you should double check your wiring:
Make sure that you’re using pin 9 to connect the data (white/yellow) line to the servo.
Check your connections to the potentiometer and make sure that the center pin is connected to Analog pin 0.
Check the connections on the breadboard. If the jump wires or components are not connected using the correct rows in the breadboard, they will not work.
The Knob sketch breakdown
In the declarations, the servo library, Servo.h, and a new servo object are named. The analog input pin is declared with a value of 0, showing that you are using Analog 0.
You may have noticed that the pin is numbered as 0, not A0 as in other examples. Either is fine, because A0 is just an alias of 0, as A1 is of 1, and so on. Using A0 is good for clarity, but optional.
There is one last variable to store the value of the reading, which will become the output.
#include <Servo.h> Servo myservo; // create servo object to control a servo int potpin = 0; // analog pin used to connect the potentiometer int val; // variable to read the value from the analog pin
In setup, the only item to define is myservo, which is using pin 9.
void setup() { myservo.attach(9); // attaches the servo on pin 9 to the servo object }
Rather than use two separate variables for input and output, this sketch simply uses one. First, val is used to store the raw sensor data, a value from 0 to 1023. This value is then processed using the map function to scale its range to that of the servo: 0 to 179.
This value is then written to the servo using myservo.write. There is also a 15 millisecond delay to reach that location. Then the loop repeats and updates its position as necessary. }
With this simple addition to the circuit, it’s possible to control a servo with any sort of input. In this example, the code uses an analog input, but with a few changes it could just as easily use a digital input. | http://www.dummies.com/how-to/content/how-to-control-your-servo-with-the-arduino.navId-817145.html | CC-MAIN-2013-48 | refinedweb | 749 | 69.41 |
18 Jun 2012 08:11
nearest() for GRanges
Oleg Mayba <mayba.oleg@...>
2012-06-18 06:11:22 GMT
2012-06-18 06:11:22 GMT
Hi, I just noticed that a piece of logic I was relying on with GRanges before does not seem to work anymore. Namely, I expect the behavior of nearest() with a single GRanges object as an argument to be the same as that of IRanges (example below), but it's not anymore. I expect nearest(GR1) NOT to behave trivially but to return the closest range OTHER than the range itself. I could swear that was the case before, but isn't any longer: > z=IRanges(start=c(1,5,10), end=c(2,7,12)) > z IRanges of length 3 start end width [1] 1 2 2 [2] 5 7 3 [3] 10 12 3 > nearest(z) [1] 2 1 2 > > z=GRanges(seqnames=rep('chr1',3),ranges=IRanges(start=c(1,5,10), end=c(2,7,12))) > z GRanges with 3 ranges and 0 elementMetadata cols: seqnames ranges strand <Rle> <IRanges> <Rle> [1] chr1 [ 1, 2] * [2] chr1 [ 5, 7] * [3] chr1 [10, 12] * --- seqlengths: chr1 NA > nearest(z) [1] 1 2 3 > > sessionInfo() R version 2.15.0 (2012-03-30)] datasets utils grDevices graphics stats methods base other attached packages: [1] GenomicRanges_1.8.6 IRanges_1.14.3 BiocGenerics_0.2.0 loaded via a namespace (and not attached): [1] stats4_2.15.0 > I want the IRanges behavior and not what seems currently to be the GRanges behavior, since I have some code that depends on it. Is there a quick way to make nearest() do that for me again? Thanks! Oleg. [[alternative HTML version deleted]] _______________________________________________ Bioconductor mailing list Bioconductor@... Search the archives: | http://permalink.gmane.org/gmane.science.biology.informatics.conductor/41786 | CC-MAIN-2013-20 | refinedweb | 293 | 66.27 |
Results 1 to 1 of 1
Thread: Mactel + Soulseek
- Mactel + Soulseek
My father recently got a 20" Intel iMac. Now, I know that installing Nicotine for Soulseek probably isn't possible, but I've tried the nicotine.app anyway. This is the error I get in console while loading it. The app icon opens in the dock and then just closes a few seconds later. I've installed X11 as directions tell you to.
Could not init font path element /usr/X11R6/lib/X11/fonts/CID/, removing from list!
bash: no job control in this shell
Nicotine supports "psyco", an inline optimizer for python
code, you can get it at
Traceback (most recent call last):
File "sw/bin/nicotine", line 142, in ?
result = checkenv()
File "sw/bin/nicotine", line 68, in checkenv
import gtk
File "/Users/vasi/Hacking/fink-apps/Nicotine/Nicotine.app/Contents/Resources/sw/lib/python2.3/site-packages/gtk-2.0/gtk/__init__.py", line 33, in ?
ImportError: dlopen(/Applications/Nicotine.app/Contents/Resources/sw/lib/python2.3/site-packages/gtk-2.0/gobject.so, 2): no suitable image found. Did find:
/Applications/Nicotine.app/Contents/Resources/sw/lib/python2.3/site-packages/gtk-2.0/gobject.so: mach-o, but wrong architecture
Is this problem specific to Intel Macs or just a general problem? Can it be fixed so that this program can run? The Soulseex client is horrible and hardly an alternative."say, can you hear that? it's the sound of the void."
Thread Information
Users Browsing this Thread
There are currently 1 users browsing this thread. (0 members and 1 guests)
Similar Threads
- Replies: 1Last Post: 01-25-2010, 06:45 PM
Mac Soulseek/DC++ ?By Pheezin in forum Schweb's LoungeReplies: 1Last Post: 11-09-2006, 04:24 AM
Mactel Desktops?By VastDeathmaster in forum Apple Rumors and ReportsReplies: 25Last Post: 06-06-2006, 12:39 AM
soulseek for macBy vic one in forum macOS - Apps and GamesReplies: 4Last Post: 09-10-2005, 09:36 PM
anybody done soulseek?!By analyze in forum Archival ForumReplies: 3Last Post: 04-20-2005, 12:02 PM | http://www.mac-forums.com/forums/switcher-hangout/33982-mactel-soulseek.html | CC-MAIN-2018-22 | refinedweb | 351 | 58.69 |
Introduction: Color Sphere With Arduino
Greetings, technology and Arduino enthusiasts. I wanted to share a project I have been working on using a NeoPixel LED strip and an ADXL335 3-axis accelerometer. I would consider this a medium-level project, and you will want to be familiar with the basics of coding with Arduino and C++. That being said, I broke this Instructable down into parts so you could learn just the ADXL335 or just the NeoPixel strip individually. There is also a lot of room to expand this project into something far more advanced for experienced users. Once you start mixing and matching different Arduino projects and editing other people's code, you will see how much deeper your coding experience becomes. Please feel free to pick and choose where you would like to grow and learn.
Step 1: Parts and Equipment
Arduino Uno - $12.00 (reusable)
The brain behind this operation will be an Arduino Uno. There are many different models to choose from but most have the same basics. you will want to get yourself one and the appropriate USB cable to get started with for this project and future ones.
ADXL335 Accelerometer - $15.00 (reusable)
The first major component of this project is the 3-axis accelerometer. It is a small piece and it may require you to solder on the pins at first. There are also accelerometers with the pins already soldered in place so look around for what you would like. This can also be used for other projects as you build your collection of sensors and components.
Addressable RGB LED Strip - $7.00 (30 LED reusable)
LED strips are very popular and can be found all over in different shapes and sizes. I previously bought a strip containing 144 LEDs and I cut it down into the lengths I need for each project. The cost changes depending on how long the strip is, how many LEDs there are per meter, and other features like waterproofing. I have 30 LEDs crammed into this hamster ball, which we will cover in the NeoPixel section.
Battery Pack (4 x AA) - $5.00 (reusable)
This will be covered later when we talk about powering the LED strip. The size you need will depend on how many LEDs you plan on powering. I wanted a finished product that could roll around and be portable. If you are only making a prototype to practice and learn, you could get away without an actual battery pack; your project would just need to be connected to a different power source, like a computer or power adapter.
Jumper Wires (3 minimum)- $6.00 (multipack, semi-reusable)
Jumper wires are basically reusable wires for Arduino and breadboards. They allow you to plug and unplug connections simply, as opposed to permanently soldering them. The pack above contains male/male, male/female, and female/female wires (males have an outie, females have an innie). That is more than you need, but they are good to have and do tend to break eventually. The LED strip needs 3 wires to get up and running. The ADXL335 needs 5 wires to get set up, but in this example I soldered on pin headers instead and then found a nice spot to plug it directly into the Arduino.
Hamster Ball (5 inches minimum) - $5.00(optional)
The hamster ball is what will hold everything together. I wanted a clear, round object, and a hamster ball seemed to fit the bill. The one shown in the link is just big enough to house the Arduino, battery pack, and the LED strip. I think I would even go bigger so I have more room to get my hands in there. If you are just testing out the LED strip and/or the ADXL335, then this may be optional for you.
Overall, the cost of this project is around $40-$50 if you are starting from ground zero. If you have used Arduino before, many of the items should already be in your possession. Considering that almost all of these components are reusable and great for other projects, the value goes way up compared to the cost. There are also great Arduino kits that come with most of the basics bundled together.
Step 2: ADXL335 3-Axis Accelerometer
Adafruit link for ADXL335
Global Variables
Global variables are written above the void setup() section of our code. This is a common spot to declare variables for the rest of the code, and it is also the spot to name the pins that will be connected to the ADXL335 accelerometer. "const int" stands for constant integer and should appear blue in the code. Integers are whole numbers, and constant means this whole number will not change. After "const int" I put the name of the variable: groundPin, powerPin, xPin, and so on are names I created so I can keep track of which pin on the Arduino is connected to which pin on the accelerometer. After that, we set the variable equal (=) to the corresponding pin number. The attached image of an Arduino Uno shows all the different pin numbers; pins 15-19 correspond to analog pins A1-A5, so "const int groundPin = A5;" would also be acceptable. Taken together, this line of code means: whenever I write groundPin, replace it with the number 19. I also declared three more variables as plain int variables. These are integers as well, but they are not constant, so they can change throughout the code. They will be used to store the x, y, and z values as the ADXL moves around.
const int groundPin = 19;
const int powerPin = 18;
const int xPin = 17;
const int yPin = 16;
const int zPin = 15;

int xVal = 0;
int yVal = 0;
int zVal = 0;
void setup()
The void setup() section of a sketch is used for activating the general input/output (GPIO) pins and for parts of our code that we need to run once before going into the void loop(). The void setup() runs only one time, when the Arduino is first turned on. Looking at my void setup(), it may seem odd that I am using pins 18 and 19 as my ground and power pins. Inside the void setup() you can see that I activate 18 and 19 as OUTPUTs, then digitalWrite 18 to HIGH and 19 to LOW. HIGH and LOW correspond to 5 volts and 0 volts, which makes 18 my power pin and 19 my ground pin. I know there are other 5V and GND pins on the Arduino, but can you see how neatly the ADXL335 fits now? This saves me extra jumper wires and gets rid of the need for a breadboard. Finally, I activated the Serial monitor using Serial.begin(9600); this will let us see the values that the accelerometer is reading for x, y, and z later on.
void setup() {
  pinMode(groundPin, OUTPUT);
  pinMode(powerPin, OUTPUT);
  digitalWrite(groundPin, LOW);
  digitalWrite(powerPin, HIGH);
  Serial.begin(9600);
}
void loop()
The final section of our code is the void loop(). This section runs from top to bottom and then repeats continually as long as the Arduino has power. In this section we will constantly read the three axes of the accelerometer and print them to the Serial monitor for us to see. The first line of code shows us analog reading the xPin (pin 17, or A3) and saving that value in the variable xVal. analogRead is used to check the voltage coming into an analog pin. It returns a value between 0 and 1023, which corresponds to voltages of 0-5 volts. This gives us a detailed range of values, as opposed to digitalRead, which only reads 0 OR 1023, meaning 0 OR 5 volts. digitalRead is more useful for something like a button, which is either on or off. For this ADXL we want to know how much we are tilted toward x, y, or z, so we want to analogRead. In order to work with these numbers, we should print the newly saved values to the Serial monitor. Serial.print(xVal) will write on the Serial monitor whatever xVal is equal to that time through the loop. Serial.println means write on the Serial monitor and then skip a line. My first suggestion would be to Serial.print only one of the variables at a time so you can see clearly. You can also experiment with where to put the "ln" to see how that changes the look of the Serial monitor. Ideally, you want to see each value in its own scrolling column, with labels and nice spaces.
void loop() {
  xVal = analogRead(xPin);
  yVal = analogRead(yPin);
  zVal = analogRead(zPin);

  Serial.print(xVal);
  Serial.print("\t");   // a tab keeps the columns separated
  Serial.print(yVal);
  Serial.print("\t");
  Serial.println(zVal); // println ends the row
}
That is a completed sketch for the ADXL335 accelerometer. You now have a tilt sensor that can give you values for how much you are tilted in each direction. The fun comes when you start thinking of all the cool things you can do in response to this tilt. With this major section of my project completed and bug-free, I can now look to combine it with the RGB strip.
Step 3: Neopixel Adressable RGB LED Strip
The next step in this project is to get our RGB strip up and running. This involves practicing and learning about the NeoPixel strip and the NeoPixel library. Projects with multiple components should usually have each part tested individually first; get a good understanding of each piece of a project before combining them and you will save a lot of trouble later. This Adafruit Uber Guide was a great place to start for me. All the material is in one location, so you won't need to bounce around the internet trying to match different code. The strandtest example built into the NeoPixel library shows several types of patterns with interesting coding behind them. Trying to decode how each effect is made was an enjoyable way for me to get more experience with coding. The video shows a strip of RGB LEDs cycling through several patterns.
Solder Wires to Strip
Depending on which strip of LEDs you buy, you may need to solder wires to the end. A brand-new strand usually comes with wires attached, but this particular spool has been used before. You can cut the strip along the copper connections shown above. I put dabs of solder on those copper tabs and then soldered male jumper wires to them. You can see in the picture above that the ground (GND), power (5V), and data (D0) connections now have usable wires attached to them. I am not the world's greatest solderer....yet. I do find it a fun part of most projects, a little action in the middle of a long story. Get it done however you can; just make sure none of the three connections get soldered to each other. Stick with that rule and you will be fine for the most part.
How Many Neopixels?
Each individual NeoPixel on the strip draws its own current. Most of the bugs I have seen with RGB strips relate to the amount of current supplied to the strip. While one NeoPixel or a few can be run fairly easily, sometimes I have close to 150 running at one time. If there is not enough current to run every NeoPixel plus the Arduino microcontroller, you will begin to see some funky effects. Sometimes the Arduino will shut itself off because there isn't enough power. Other times the colors will change toward the end of the strip as it runs out of power; you will actually notice some colors take much more energy than others. This is part of the fun of tinkering with something new. The same Adafruit Uber Guide from earlier breaks down in detail how much power you will need. I have about 30 LEDs on my strip now and it almost always works. I would do somewhere between 10 and 20 LEDs to be safe.
Important Code
Above the void setup(), a few lines of code are needed to properly use the strip and the library. You will need to include the Adafruit_NeoPixel library and enter some parameters using the lines of code below. If the library is installed correctly, its name should turn orange in the IDE. Make sure the PIN number matches the pin you connect the strip to the Arduino with; this example uses pin 6. The first parameter is the number of LEDs on your particular strip: the 30 in front of PIN means I have 30 individual RGB LEDs to control. Adjust this number to match your strip.
#include <Adafruit_NeoPixel.h>
#define PIN 6

Adafruit_NeoPixel strip = Adafruit_NeoPixel(30, PIN, NEO_GRB + NEO_KHZ800);
In the void setup() you need only two lines of code. These lines turn on the strip and then set all the LEDs off until we change that in the void loop(). As you can see in the strand test example, there are seemingly countless possibilities and combinations of colors and effects you can use with the NeoPixel strip. I chose to take the color wipe function from that example code and use it for the color sphere. At the bottom of the code there is now a new function used to change the color of all the LEDs. If you write colorWipe in your loop, it will call the colorWipe function written below. You need to enter some info so it knows how to do this. After writing colorWipe(strip.Color( you need to enter the color you want and how long to wait before changing the color of the next pixel. The first three numbers set the color we want. The order is (red, green, blue) and the numbers can be between 0 and 255. 255 would be max brightness and 0 would be basically off. The fourth number after that is the delay. The loop below changes each LED to red with a 50 millisecond delay between pixels, then turns them all green, and finally turns them all blue.
void setup() {
  strip.begin();
  strip.show();
}

void loop() {
  colorWipe(strip.Color(255, 0, 0), 50);
  colorWipe(strip.Color(0, 255, 0), 50);
  colorWipe(strip.Color(0, 0, 255), 50);
}

void colorWipe(uint32_t c, uint8_t wait) {
  for (uint16_t i = 0; i < strip.numPixels(); i++) {
    strip.setPixelColor(i, c);
    strip.show();
    delay(wait);
  }
}
Step 4: Mashing the Codes Together
This is the time to bring it all together. Be careful when trying to mix code together; do more than simply cut and paste one after another. The general rule should be: everything above the void setup() should go together, everything in the void setup() should be together, and then everything in the void loop() should be put in a logical order. The only real addition in this code is the if statements that control which color to make the ball as it is moved. There are endless ways to change the color and I plan to continue searching for more innovative ways. This example makes the ball red when the xVal is bigger than both the yVal and zVal. It turns green when yVal is the largest and blue when zVal is the largest. The human way to read the first if statement: if xVal is greater than yVal and zVal, then change the strip to red. If ("this is true") { then do everything in between here }. This is the final code for the complete project, but please feel free to get more creative with it. There is also added code so we can see the x, y and z values. That is done using the Serial.print(); function. The way it's written will show all three values with a tab between them so there are three nice columns.
#include <Adafruit_NeoPixel.h>
#define PIN 6
Adafruit_NeoPixel strip = Adafruit_NeoPixel(30, PIN, NEO_GRB + NEO_KHZ800);

const int groundPin = 19;
const int powerPin = 18;
const int xPin = A3;
const int yPin = A2;
const int zPin = A1;
int xVal = 0;
int yVal = 0;
int zVal = 0;

void setup() {
  Serial.begin(9600);
  strip.begin();
  strip.show();
  pinMode(groundPin, OUTPUT);
  pinMode(powerPin, OUTPUT);
  digitalWrite(groundPin, LOW);
  digitalWrite(powerPin, HIGH);
}

void loop() {
  xVal = analogRead(xPin);
  yVal = analogRead(yPin);
  zVal = analogRead(zPin);
  Serial.print(xVal);
  Serial.print("\t");
  Serial.print(yVal);
  Serial.print("\t");
  Serial.println(zVal);
  if (xVal > yVal && xVal > zVal) {
    colorWipe(strip.Color(255, 0, 0), 50);
  }
  if (yVal > xVal && yVal > zVal) {
    colorWipe(strip.Color(0, 255, 0), 50);
  }
  if (zVal > xVal && zVal > yVal) {
    colorWipe(strip.Color(0, 0, 255), 50);
  }
}

void colorWipe(uint32_t c, uint8_t wait) {
  for (uint16_t i = 0; i < strip.numPixels(); i++) {
    strip.setPixelColor(i, c);
    strip.show();
    delay(wait);
  }
}
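The axis-comparison logic in the sketch above can be restated language-neutrally. Here is a small Python version for illustration; the function name and the returned color strings are mine, not part of the Arduino code:

```python
def dominant_color(x_val, y_val, z_val):
    """Mirror of the sketch's if statements: the largest axis reading wins."""
    if x_val > y_val and x_val > z_val:
        return "red"
    if y_val > x_val and y_val > z_val:
        return "green"
    if z_val > x_val and z_val > y_val:
        return "blue"
    # On a tie no if statement fires, so the strip keeps its last color
    return None

print(dominant_color(512, 330, 340))  # red
```

Note the tie behavior: because every comparison is strict, two equal readings leave the strip unchanged, which matches how the Arduino loop behaves.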
Row based testing in MbUnit (i.e. RowTest)
Jonathan de Halleux, aka Peli, never ceases to impress me with his innovations within MbUnit. In case you're not familiar with MbUnit, it's a unit testing framework similar to NUnit.
The difference is that while NUnit seems to have stagnated, Jonathan is constantly innovating new features, test fixtures, etc... for a complete unit testing solution. In fact, he's even made it so that you can run your NUnit tests within MbUnit without a recompile.
His latest feature is not necessarily a mind blower, but it will definitely save me a lot of time writing the same type of code over and over for testing a range of values. I'll just show you a code snippet and you can figure out what it's doing for you.
[TestFixture]
public class DivisionFixture
{
    [RowTest]
    [Row(1000, 10, 100.0000)]
    [Row(-1000, 10, -100.0000)]
    [Row(1000, 7, 142.85715)]
    [Row(1000, 0.00001, 100000000)]
    [Row(4195835, 3145729, 1.3338196)]
    public void DivTest(double num, double den, double res)
    {
        Assert.AreEqual(res, num / den, 0.00001);
    }
}
And if you're anal like me and wondering why I chose "num" instead of "numerator" etc... Purely for blog formatting reasons. ;)
UPDATE: Jonathan points out that negative assertions are also supported. Here's an illustrative code snippet. I can't wait to try this out.
[RowTest]
[Row(1000, 10, 100.0000)]
...
[Row(1, 0, 0, ExpectedException = typeof(ArithmeticException))]
public void DivTest(double num, double den, double res) {...}
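For readers outside .NET, the same row-based idea can be sketched in plain Python with a list of tuples; the rows and tolerance here simply mirror the MbUnit snippet above (the helper name is mine):

```python
rows = [
    (1000, 10, 100.0000),
    (-1000, 10, -100.0000),
    (1000, 7, 142.85715),
    (4195835, 3145729, 1.3338196),
]

def div_test(rows, tolerance=0.00001):
    # Run the same assertion once per row, like [RowTest] with [Row(...)]
    for num, den, res in rows:
        assert abs(num / den - res) <= tolerance, (num, den, res)

div_test(rows)
print("all rows passed")
```

Test frameworks such as pytest offer the same pattern natively via parametrized tests, but the core idea is just data-driven assertions like this loop.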
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
On Fri, Mar 30, 2001 at 04:38:04PM -0800, Benjamin Kosnik wrote:
> > Anyone? Anyone? What do y'all think of just listing public members for
> > J. Random User? Maybe we can use this as a starting point for discussion.
>
> how about public member functions and all (public/private/protected) member
> data?

That would be pretty useful. As I'm discovering, the problem is not controlling public/private/protected (that's easy enough), but rather controlling "implementation namespace". We're getting all of the __foo helper functions and whatnot. Also, it likes to generate multiple views of the same data, e.g., a list of all things, plus the same list in namespace hierarchy, plus an alphabetized list, plus a list of files... right now I'm just trying to trim that down. I'm using Doxygen 1.2.6 and the present "make doxygen" user.cfg.in to work with. (I agree, let's ignore the maintainer file altogether for now.)

Phil
--
pedwards at disaster dot jaj dot com | pme at sources dot redhat dot com
devphil at several other less interesting addresses in various dot domains
The gods do not protect fools. Fools are protected by more capable fools.
SQL Server :: Generate A Sequential Auto Generated Id In C#? (Feb 22, 2011)
I wanna generate a sequential auto generated id in c# and store it in mssql database.
format is abc0009.. abc0010.. abc9999.... abc10000 abc10001 and goes on..
I need to generate sequential user id's e.g.
[code]... I need to generate a sequential number for my new Users. For every user I need to show a sequential number for their ID. It should be a dynamic one. Once they click register, I need to show this sequential number in a Label.
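The zero-padded format asked about above (abc0009, abc0010, abc9999, abc10000) can be produced by storing a plain integer identity value and formatting it only for display, so abc9999 naturally rolls over to abc10000. A sketch (the function name and prefix default are mine, not from the thread):

```python
def format_user_id(n, prefix="abc", pad=4):
    # Zero-pad up to `pad` digits; larger numbers simply grow the string
    return f"{prefix}{n:0{pad}d}"

for n in (9, 10, 9999, 10000, 10001):
    print(format_user_id(n))
# abc0009, abc0010, abc9999, abc10000, abc10001
```

Keeping the stored value numeric (and letting the database's identity/sequence feature assign it) avoids race conditions; the string is purely a presentation concern.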
I am having a problem generating an auto control number: when someone fills in the form, after submitting it, on the next window they should get an auto control number.
I really don't know if this post fits in this forum section, but here I go. I need some suggestions for developing ASP.NET code that generates complex IDs / keys automatically. Let me be more understandable:

I want to create unique IDs with a more complex composition (something like 18958351512783997769711... or crlu0xakecjzlgmjsgnedr55, or something like that; the length doesn't matter, those are only examples).

So, I can use that to set cookie IDs, product code IDs, customer cart IDs, etc., but it needs to generate unique values. I don't know if you got my point; I hope you did.

I know that Visual Studio auto-generates a lot of code when you develop a web application. How do I know which code is auto-generated? I do a lot of code editing and tweaking by hand (I'm not a WYSIWYG guy), and I want to make sure I'm not editing something that's going to get overwritten later by the auto-generator!
I need to send automatic emails after a set interval of time.
I'm trying to call a SOAP web service, and I need to pass an Address object to the server. I can pass an existing address.Id to update an existing address, or if I leave the address.Id empty, it should be saved as a new address. The problem is that the Id is of long type, and it always has a value of 0. And this causes problems for the server, because even if the Id=0, the server-side function will treat the address as an existing one, and it will start to search the database for an address with Id=0. Of course there is no such address, and it throws an error. When I try to call the web service with WebService Studio or SoapUI, and I delete the Id manually, then it works as expected; as soon as I put <address id="0"> it returns me an error - Address with Id=0 not found. So the question is, how do I change the web service definitions, or the proxy classes, so that it does not generate this id="0" at all? P.S. I cannot change the server-side method; it would have been the easiest solution, but unfortunately it is not possible.
My application is in VS.NET 2005 (.NET 2.0) with AJAX controls.
I am using most of the AJAX controls (tabs, calendar, rating, slider) and so on.
Now the problem is, when the page gets generated it also generates lots of JavaScript + CSS,
so my page is too large as well as too heavy. I also tested my page with YSlow + Google Page Speed,
and it's showing me some errors like (merge CSS and JS into one).
I have a DropDownList and need to know its name in the code behind:
<select name="ctl00$cphMainContent$ddlTopic" onchange="javascript:setTimeout('__doPostBack('ctl00$cphMainContent$ddlTopic','')', 0)" id="ctl00_cphMainContent_ddlTopic">
<option value="All">All</option>
I need to get the value "ctl00$cphMainContent$ddlTopic"
I used the auto-generated models to use my existing SQL Server in a new website. In some forms I use a rich text editor (CKEditor), so I added [ValidateInput(false)] to those postback actions. But that is a bad solution because I have lots of other form items which need validation. I came across a potential solution to that using [AllowHtml], which I need to add to the model properties. I tried that:
[Code]....
but that gives an error: "type or namespace couldn't be found..". Any ideas about where the [AllowHtml] attribute should be?
In a database table, let's say I have a non-null field named "Description" with a default value of (''). If I drag this table onto the .dbml view in Visual Studio and click on the Description field, the properties will indicate that the Auto Generated Value is set to false, thus ignoring my default value of (''). This is fine, as long as I always explicitly set a value for Description when I perform an insert/update, but I ran into a problem today when I tried to do an insert without specifying a value for Description: Cannot insert NULL value into Description. The default value I set for the field was being ignored. To try and fix this error, I went back into the .dbml and set Auto Generated Value to true, and tried again - no error this time, but the data I tried to insert was ignored, and the default value, (''), was inserted instead. I do not want to explicitly set every field to a default value programmatically when I perform inserts. Is there any other way to resolve this issue other than making the fields nullable?
I'm creating a column through an ItemTemplate in my GridView. How can I adjust it to be my last column? Unfortunately .NET is making this column the first in the GridView, and I want it to be the last column. The rest of the columns are automatically created by this code.
I mean
gridview1.datasource = myArrayList
gridview1.databind()
Hey thanks a lot!!!
The following link worked fine
However, I am having one other problem. There are four solutions in my project. All my controllers and views are declared in the Presentation solution. There are auto-generated class files in the solution named DomainModel. So, my views inherit these auto-generated files from the DomainModel solution. The problem is that I can't modify these auto-generated files by using Data Annotations in there; hence, I cannot accomplish my validation. I tried creating partial classes but with no result. By creating a class file in my Presentation solution and then binding it with the view, the validation works fine. But my views are bound with the class files (auto-generated) from the DomainModel.
Can you/anyone help me work it out so that validation works with those classes without modifying them?
I'm worried about doing this since my changes will be overwritten when the dbml file is auto-generated again (as it often is).

I'm thinking of doing a partial class and writing out the same properties to annotate them, but I'm worried that it will complain about duplicates, and the reason I can't even experiment brings me to the second part of my question...

... that is, the expandable arrow on my dbml file listing is missing; right-clicking and selecting "View Code" just displays an empty partial class as below...
Partial Class FPDataContext
End Class
I'm using VS2010 RC and am just developing an MVC 2.0 app where I want to be able to use the UI annotations such as [UIHint("RelativeDateTime")]
edit:
here is my VB version edit as an example...
Imports System.ComponentModel.DataAnnotations
<MetadataType(GetType(CommentMetaData))> _
Partial Public Class Comment
End Class
Public Class CommentMetaData
<UIHint("PostedSince")> _
Public Property DateAdded() As DateTime
End Class
When you add a Web Reference in an ASP.NET project in Visual Studio the web application's root namespace is always added.So, if I add a web reference called MyWebService and the default namespace of the application is MyApplication the namespace of the generated proxy class will be: MyApplication.MyWebService.
However, I want to be able to specify which namespace to use for the generated class (to skip the default namespace and have the namespace be called simply MyWebService).Is using wsdl.exe through the command line the only way of accomplishing this? I don't want to manually edit the generated class (since it can get re-generated).
I am using SQL Server 2008.
I want to know how to get a TransactionID (customized auto-generated number) with 9 characters (2 alphabetic, 7 numeric).
[Code].....
I'm working on a RoleProvider in .NET, using Fluent NHibernate to map tables in an Oracle 9.2 database.
The problem is that the many-to-many table connecting users and roles uses a primary key generated from a sequence, as opposed to a composite key. I can't really change this, because I'm writing it to be implemented in a larger existing system.
Here is my UserMap:
public UserMap()
{
this.Table("USR");[code].....
Yet, this is giving me the error:
Type 'FluentNHibernate.Cfg.FluentConfigurationException' in assembly 'FluentNHibernate, Version=1.0.0.593, Culture=neutral, PublicKeyToken=8aa435e3cb308880' is not marked as serializable.
Is there a simple fix to allow this HasManyToMany to use my PersistentObjectMap extension? I'm thinking I may have to add a convention for this many-to-many relationship, but I don't know where to start with that, since I've just started using NHibernate and Fluent NHibernate only recently.
EDIT: I think I've found a possible solution here:
I'll try the above method of creating an entity and a class map for the linking table and post my findings.
EDIT 2: I created a linking entity as mentioned in the above blog post and downloaded the newest binaries (1.0.0.623).
This helped me discover that the issue was with setting lazy load and trying to add roles to the user object in a completely new session.
I modified the code to move OpenSession to the BeginRequest of an HttpModule as described here. After doing this, I changed my data access code from wrapping the open session in a using statement, which closes the session when it is finished, to getting the current session and wrapping only the transaction in a using statement.
This seems to have resolved the bulk of my issue, but I am now getting an error that says "Could not insert collection" into the USR_ROLE table. And I'm wondering if the above code should work with a UserRoleMap described as:
public UserRoleMap()
{
this.Table("USR_ROLE"); [code]....
Hibernate's documentation for many-to-many relationship suggests creating an object to maintain a one-to-many/many-to-one, as in an ERD. I'm sure this would be much easier with conventional naming standards, but I have to stick with certain abbreviations and odd (and not always properly-implemented) conventions.
I have written a T4 template that basically replaces the work done by StronglyTypedResourceBuilder to give design-time access to string resources as properties of classes, one class per resource file. Why? Well, I needed to add some custom code that handles string token substitution and a few other customizations. Anyway, the template is working out really well and I may blog on it soon.

In the meantime, though, following a common pattern, I have one .resx file for each view, master page, controller, etc. I would really like my T4 template to add a property to each such entity that gives quick access to the custom resource class associated with it. For controllers this is easy. T4MVC is ensuring that they are all declared partial. All I need to do is create the appropriate partial class in my output that declares a readonly property that returns an instance of the appropriate generated resource class.
The Problem:
I would like to do the same thing, inject generated code, into my views. Were this traditional ASP.NET, each .aspx page would have a .aspx.cs page and possibly an .aspx.designer.cs page that are all partial classes extending the aspx page's class definition. There is no such thing by default in MVC, and for good reason. However, I think for my purposes, if there is a way to do this, it would be the way to go.
I could subclass ViewPage and add a generic type parameter and a property that returns the class of that type but that has certain complications. I think adding auto generated code to each view via partial class (that is what partials are for after all) is the way to go.
I have made a basic attempt. I have created a .aspx.cs file for a view and placed a code behind attribute in the page declaration but the class generated from the view seems to reside in a different assembly any my "partial class" ends up as its own class in the same assembly as all my other code.
I have a GridView bound to a DataTable that I construct. Most columns in the table contain the raw HTML for a hyperlink, and I would like that HTML to render as a link in the browser, but the GridView is automatically encoding the HTML, so it renders as markup.
How can I avoid this without explicitly adding HyperLink, or any other, columns?
Is there any way with Visual Web Developer Express to change the way the "Configure Datasource Wizard" generates its SQL statement commands, to surround the database tables with quotes instead of brackets for ODBC connections to non-Microsoft databases?
For example, I have a database that uses a DataDirect odbc driver and visual web developer seems to connect to it just fine with the connection string but the "Configure Datasource Wizard" generates the following code:
SELECT * FROM [Customers] which breaks my odbc driver instead of using SELECT * FROM "Customers" which works fine with my odbc driver. Most of the other software programs I use generate quotes instead of brackets including Crystal Reports, PrimalScript, PrimalSql, Admin Script Editor and FileMaker Pro Advanced so it would really if we could get visual web developer to generate the quote syntax also.
I have noticed that you can go back in the code later and change the statement, but it sure would be convenient if I could just tweak something so that Visual Studio would generate it correctly with the wizard....
I am using an ASP.NET GridView which is populated dynamically; that means it has auto-generated columns at run time.
I just want to know the db type of these columns: whether a column is text, date or int type.
I am currently working on a module in which I am creating an auto-generated-column GridView, and I have set the auto-generate edit button field to true. What happens now when I click the edit button is that the primary key field is turned into edit mode, which I want to restrict from being updated.

To restrict this, I have hard-coded a template field which holds that column. But as I have set the AutoGenerateColumns property to true, the GridView automatically creates a duplicate field for holding the primary key column, and I want to delete that particular column.
My Designer Code is:-
[Code]....
in page load I am binding the grid view.
I have even tried to hide that column, but it doesn't make any difference.
In short, I want to access auto-generated textbox (without any predefined ID) values from the footer of the GridView.
Is there any way to set the GridView auto-generated column width?
I have generated Entity Framework designer classes. After generating the designer, what is the nicest and cleanest way to apply data annotations to the properties there? I have 3 classes there.
Currently, I am programming a web-based application with Visual Studio.NET 2003, and I am having a problem with the error below:

An error has occurred because a control with auto-generated id 'dgPreventiveMaintenanceSchedule:_ctl5:_ctl3' could not be located to raise a postback event. To avoid this error, explicitly set the ID property of controls that raise postback events.

That error occurs while I am rendering my datagrid for certain data. In other words, it happens occasionally.

I've been trying to debug one by one, but I still don't have any clues about this kind of error.
Is there any way to auto-generate ASP.NET controls based on fields in a SqlDataSource? It is really easy to create a form in WinForms by dragging the fields onto the form. You can even determine which control will be used (like a dropdown). Is there any way to do this in ASP.NET? I don't want a DetailsView since I need to have separate controls created.
Extracting text (plain text or HTML) from a PDF file is simple in Python: we can use the PyMuPDF library, which contains many basic PDF operations. In this tutorial, we will show you how to extract text from PDF files with it.
Import library
import sys, fitz
Prepare a pdf file
pdf = "F:\\test.pdf"
Open this pdf
doc = fitz.open(pdf)
Extract text page by page
for page in doc:
    text = page.getText("text")
    html_text = page.getText("html")
    print(text)
    print(html_text)
Notice:
1. To extract plain text, we should use the page.getText("text") method
2. To extract HTML text, we should use the page.getText("html") method
PyMuPDF can also extract other types of text, such as xhtml, xml, and dict. You can check here for more details.
Thank you so much for the article. I have a problem: I have a number of PDF files and I want to extract text from the first page of each PDF file and save the text either to a text file or a CSV file.
Thank you
In order to extract the text of the first page, you can use PyMuPDF. However, some PDF files cannot be extracted, because they may have been created by a scanner; in that situation, you can extract text from images using Python: convert the first page of the PDF to an image, then extract the text. More details here:
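For the commenter's first-page-to-CSV question, here is a sketch of the CSV half using only the standard library. The PyMuPDF step is assumed to have already produced (filename, first-page text) pairs, as in the getText example above; the function name, output file name, and column headers are made up for illustration:

```python
import csv

def write_first_page_texts(records, csv_path):
    """Write (pdf_filename, first_page_text) pairs to a CSV file.

    In PyMuPDF terms each text would come from something like
    fitz.open(name)[0].getText("text"); that step is assumed here
    so the sketch stays self-contained.
    """
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["file", "first_page_text"])
        for name, text in records:
            writer.writerow([name, text])

# Hypothetical usage with already-extracted text:
write_first_page_texts([("a.pdf", "Hello"), ("b.pdf", "World")],
                       "first_pages.csv")
```

Using the csv module (rather than joining strings by hand) keeps embedded commas and newlines in the extracted text from corrupting the file.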
To accept the grades from the user
package regradesaverage;

public class ReGradesAverage {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        int[] array = {87, 68, 94, 100};
        int total = 0;
        for (int counter = 0; counter < array.length; counter++)
            total += array[counter];
        System.out.printf("Total sum of Grades : %d\n", total);
        System.out.printf("The Grade Average is: %d\n", total / array.length);
    }
}
893973 wrote:Exactly, you need to create a method.
I need to create a method called "average" that takes an array of strings as a parameter and returns a double.
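The requested method is a one-liner once the strings are parsed to numbers. A sketch of the idea, shown in Python rather than Java purely for brevity:

```python
def average(grades):
    """Average a list of grade strings such as ["87", "68", "94", "100"]."""
    numbers = [float(g) for g in grades]
    return sum(numbers) / len(numbers)

print(average(["87", "68", "94", "100"]))  # 87.25
```

The Java version is the same shape: parse each element with Double.parseDouble, accumulate a running total, and divide by the array length, returning double so the fractional part is not truncated as it is by the integer division in the snippet above.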
Fri 3 Jan 2014
Reportlab: How to Create Landscape Pages
Posted by Mike under Cross-Platform, Python
The other day I had an interesting task I needed to complete with Reportlab. I needed to create a PDF in landscape orientation that had to be rotated 90 degrees when I saved it. To make it easier to lay out the document, I created a class with a flag that allows me to save it in landscape orientation or flip it into portrait. In this article, we’ll take a look at my code to see what it takes. If you’d like to follow along, I would recommend downloading a copy of Reportlab and pyPdf (or pyPdf2).
Reportlab Page Orientation
There are at least two ways to tell Reportlab to use a landscape orientation. The first one is a convenience function called landscape that you can import from reportlab.lib.pagesizes. You would use it like this:
from reportlab.lib.pagesizes import landscape, letter
from reportlab.pdfgen import canvas

self.c = canvas
self.c.setPageSize( landscape(letter) )
The other way to set landscape is just to set the page size explicitly:
from reportlab.lib.pagesizes import landscape
from reportlab.pdfgen import canvas
from reportlab.lib.units import inch

self.c = canvas
self.c.setPageSize( (11*inch, 8.5*inch) )
You could make this more generic by doing something like this though:
from reportlab.lib.pagesizes import landscape
from reportlab.pdfgen import canvas
from reportlab.lib.units import inch

width, height = letter
self.c = canvas
self.c.setPageSize( (height, width) )
This might make more sense, especially if you wanted to use other popular page sizes, like A4. Now let’s take a moment and look at a full-fledged example:
import pyPdf
import StringIO

from reportlab.lib import utils
from reportlab.lib.pagesizes import landscape, letter
from reportlab.platypus import (Image, SimpleDocTemplate,
                                Paragraph, Spacer)
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch, mm

########################################################################
class LandscapeMaker(object):
    """
    Demo to show how to work with Reportlab in Landscape orientation
    """

    #----------------------------------------------------------------------
    def __init__(self, rotate=False):
        """Constructor"""
        self.logo_path = "snakehead.jpg"
        self.pdf_file = "rotated.pdf"
        self.rotate = rotate
        self.story = [Spacer(0, 0.1*inch)]
        self.styles = getSampleStyleSheet()
        self.width, self.height = letter

    #----------------------------------------------------------------------
    def coord(self, x, y, unit=1):
        """
        Helper method to help position flowables in Canvas objects ()
        """
        x, y = x * unit, self.height - y * unit
        return x, y

    #----------------------------------------------------------------------
    def create_pdf(self, canvas, doc):
        """
        Create the PDF
        """
        self.c = canvas
        self.c.setPageSize( landscape(letter) )

        # add a logo and set size
        logo = self.scaleImage(self.logo_path, maxSize=90)
        logo.wrapOn(self.c, self.width, self.height)
        logo.drawOn(self.c, *self.coord(10, 113, mm))

        # draw a box around the logo
        self.c.setLineWidth(2)
        self.c.rect(20, 460, width=270, height=100)

        ptext = "<font size=16 color=red><b>Python is amazing!!!</b></font>"
        p = Paragraph(ptext, style=self.styles["Normal"])
        p.wrapOn(self.c, self.width, self.height)
        p.drawOn(self.c, *self.coord(45, 101, mm))

    #----------------------------------------------------------------------
    def save(self):
        """
        Save the PDF
        """
        if not self.rotate:
            self.doc = SimpleDocTemplate(self.pdf_file, pagesize=letter,
                                         leftMargin=0.8*inch)
        else:
            fileObj = StringIO.StringIO()
            self.doc = SimpleDocTemplate(fileObj, pagesize=letter,
                                         leftMargin=0.8*inch)

        self.doc.build(self.story, onFirstPage=self.create_pdf)

        if self.rotate:
            fileObj.seek(0)
            pdf = pyPdf.PdfFileReader(fileObj)
            output = pyPdf.PdfFileWriter()
            for page in range(pdf.getNumPages()):
                pdf_page = pdf.getPage(page)
                pdf_page.rotateClockwise(90)
                output.addPage(pdf_page)
            output.write(file(self.pdf_file, "wb"))

    #----------------------------------------------------------------------
    def scaleImage(self, img_path, maxSize=None):
        """
        Scales the image
        """
        img = utils.ImageReader(img_path)
        img.fp.close()
        if not maxSize:
            maxSize = 125
        iw, ih = img.getSize()
        if iw > ih:
            newW = maxSize
            newH = maxSize * ih / iw
        else:
            newH = maxSize
            newW = maxSize * iw / ih
        return Image(img_path, newW, newH)

#----------------------------------------------------------------------
if __name__ == "__main__":
    pdf = LandscapeMaker()
    pdf.save()
    print "PDF created!"
If you run the code above (and you have a logo to use), you will see something very similar to the screenshot at the beginning of the article. This makes laying out the document easier because text and images are horizontal. Let's spend a few minutes parsing the code. In the init, we set up a few items, such as the logo, the PDF file's name, whether to rotate or not, and a few other items. The coord method is something I found on StackOverflow that helps position flowables more easily. The create_pdf method is where most of the magic is. It calls the landscape function that we imported. This method also draws the logo, the rectangle and the words on the document.
The next method is the save method. If we don’t do the rotation, we create a SimpleDocTemplate, pass it the PDF file name and build the document. On the other hand, if we do turn on rotation, then we create a file object using Python’s StringIO library so that we can manipulate the PDF in memory. Basically we write the data to memory, then we seek to the beginning of the faux file so we can read it with pyPdf. Next we create a pyPdf writer object. Finally we loop through the PDF that’s in memory page by page and rotate each page before writing it out.
The last method is just a handy method I was given from the wxPython group that I use for scaling images. A lot of the time you will find yourself with images that are too large for your purposes and you will need to scale them down to fit. That’s all this method does.
Once you’ve got everything where you want it, you can change the code at the end to the following:
#----------------------------------------------------------------------
if __name__ == "__main__":
    pdf = LandscapeMaker(rotate=True)
    pdf.save()
    print "PDF created!"
This will cause the script to do the rotation and the output should look something like this:
Wrapping Up
Creating and editing PDFs in the landscape orientation is actually pretty easy to do in Python and Reportlab. At this point, you should be able to do it with aplomb! Good luck and happy coding!
Related Articles
- Reportlab – All About Fonts
- A Simple Step-by-Step Reportlab Tutorial
- Reportlab: How to Add Page Numbers
- Reportlab Tables – Creating Tables in PDFs with Python
- Reportlab: Mixing Fixed Content and Flowables
Often you may want to test a single piece of code. For example, say you forget how the % operator works with negative numbers, or you must determine how a certain API call operates. Writing, compiling, and running a small program repeatedly just to test small things can prove annoying.
With that in mind, in this Java Tip, I present a short program that compiles and runs Java code statements simply by using tools included in Sun's JDK 1.2 and above.
Note: You can download this article's source code from Resources.
Use javac inside your program
You'll find the javac compiler in the tools.jar library found in the lib/ directory of your JDK 1.2 and higher installation.
Many developers do not realize that an application can access javac programmatically. A class called com.sun.tools.javac.Main acts as the main entry point. If you know how to use javac on the command line, you already know how to use this class: its compile() method takes the familiar command-line input arguments.
Compile a single statement
For javac to compile any statement, the statement must be contained within a complete class. Let's define a minimal class right now:
/**
 * Source created on <this date>
 */
public class <Temporary Class Name> {
    public static void main(String[] args) throws Exception {
        <Your Statement>
    }
}
Can you figure out why the main() method must throw an exception?
Your statement obviously goes inside the main() method, as shown, but what should you write for the class name? The class name must possess the same name as the file in which it is contained because we declared it as public.
Prepare a file for compilation
Two facilities included in the java.io.File class since JDK 1.2 will help. The first facility, creating temporary files, frees us from choosing some temporary name for our source file and class. It also guarantees the file name's uniqueness. To perform this task, use the static createTempFile() method.

The second facility, automatically deleting a file when the VM exits, lets you avoid cluttering a directory or directories with temporary little test programs. You set a file for deletion by calling deleteOnExit().
Create the file
Choose the createTempFile() version with which you can specify the new file's location, instead of relying on some default temporary directory. Finally, specify that the extension must be .java and that the file prefix should be jav (the prefix choice is arbitrary):
File file = File.createTempFile("jav", ".java",
        new File(System.getProperty("user.dir")));

// Set the file to delete on exit
file.deleteOnExit();

// Get the file name and extract a class name from it
String filename = file.getName();
String classname = filename.substring(0, filename.length()-5);
Note that you extract the class name by removing the .java suffix.
Output the source with your short code segment
Next, write the source code to the file through a PrintWriter for convenience:
PrintWriter out = new PrintWriter(new FileOutputStream(file));
out.println("/**");
out.println(" * Source created on " + new Date());
out.println(" */");
out.println("public class " + classname + " {");
out.println("    public static void main(String[] args) throws Exception {");

// Your short code segment
out.print("        ");
out.println(text.getText());

out.println("    }");
out.println("}");

// Flush and close the stream
out.flush();
out.close();
The generated source code will look nice for later examination, with the added benefit that if the VM exits abnormally without deleting the temporary file, the file will not be a mystery if you stumble upon it later.
The short code segment, if you notice, is written with text.getText(). As you will see shortly, the program uses a small GUI (graphical user interface), and all your code will be typed into a TextArea called text.
Use javac to compile
To use the compiler, create a Main object instance, as mentioned above. Let's use an instance field to hold this:
private com.sun.tools.javac.Main javac = new com.sun.tools.javac.Main();
A call to compile() with some command-line arguments will compile the aforementioned file. It also returns a status code indicating either success or a problem with the compile:

String[] args = new String[] { "-d", System.getProperty("user.dir"), filename };
int status = javac.compile(args);
Run the freshly compiled program
Reflection nicely runs code inside an arbitrary class, so we'll use it to locate and execute the main() method where we placed our short code segment. In addition, we process the returned status code by displaying an appropriate message to give users a clean and informative experience. We found the meaning of each status code by decompiling javac, hence we have those weird "Compile status" messages.

The actual class file will reside in the user's current working directory, as already specified with the -d option to the javac instance.
A 0 status code indicates that the compile succeeded:
switch (status) {
    case 0:  // OK
        // Make the class file temporary as well
        new File(file.getParent(), classname + ".class").deleteOnExit();
        try {
            // Try to access the class and run its main method
            Class clazz = Class.forName(classname);
            Method main = clazz.getMethod("main", new Class[] { String[].class });
            main.invoke(null, new Object[] { new String[0] });
        } catch (InvocationTargetException ex) {
            // Exception in the main method we just tried to run
            showMsg("Exception in main: " + ex.getTargetException());
            ex.getTargetException().printStackTrace();
        } catch (Exception ex) {
            showMsg(ex.toString());
        }
        break;
    case 1: showMsg("Compile status: ERROR"); break;
    case 2: showMsg("Compile status: CMDERR"); break;
    case 3: showMsg("Compile status: SYSERR"); break;
    case 4: showMsg("Compile status: ABNORMAL"); break;
    default: showMsg("Compile status: Unknown exit status");
}
An InvocationTargetException throws when code executes through reflection and the code itself throws some exception. If that happens, the InvocationTargetException is caught and the underlying exception's stack trace prints to the console. All other important messages are sent to a showMsg() method that simply relays the text to System.err.

The non-OK status codes (codes other than zero) cause a short error message to display to inform the user what's happening if a compile problem results.
How to use the program
Yep, that's it! Other than a nice user interface and a catchy name, the program's core is complete. The program, featuring a small AWT (Abstract Windowing Toolkit) interface for input, sends all output to System.err on the console (or wherever you decide to send it by changing the showMsg() method).
So, what about a catchy name for the program? How about JavaStatement? It's concise, to the point, and so boring nobody would think it was chosen this way on purpose. Henceforth, all references to "the program" or "the application" will be replaced by "JavaStatement."
Write statements
Statements sometimes must be written a certain way, and you must take special care to run the JVM with the proper classpath. I elaborate on these issues below:
- If you use packages other than java.lang, you may notice the absence of import statements at the top of the generated source code. You may wish to add a few convenient imports such as java.io or java.util to save some typing.
- If you do not add any imports, then for any class outside java.lang, you must prepend the full package name. For example, to create a new java.util.StringTokenizer, use new java.util.StringTokenizer(...) instead of just new StringTokenizer(...).
Screenshot
The figure below shows JavaStatement's GUI, with its text area for typing statements and a Run button for executing the code. All output goes to System.err, so watch the window from where the program runs.
Run the program
JavaStatement references two classes that may not otherwise be included in the JVM's classpath: the com.sun.tools.javac.Main class from tools.jar and the temporary compiled classes situated in the current working directory.
Thus, to run the program properly, use a command line such as:
java -cp <JDK location>/lib/tools.jar;. JavaStatement
where <JDK location> represents your JDK's installed location.
For example, I installed my JDK to C:\Java\jdk1.3.1_03. Hence, my command line would be:
java -cp C:\java\jdk1.3.1_03\lib\tools.jar;. JavaStatement
Note: You must also include the tools.jar library in the classpath when compiling JavaStatement.java.
If you forget to add the tools.jar file to your classpath, you'll find complaints either about a NoClassDefFoundError for the JVM or an unresolved symbol for the compiler.
Finally, compile JavaStatement.java with the same compiler version that runs the code.
Test short bits of code? No problem!
Java developers frequently test short bits of code. JavaStatement lets you efficiently test such code by freeing you from tediously going through the cycle of writing, compiling, and running many small programs.
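As an aside (not part of the original article), dynamic languages expose this compile-and-run loop directly. In Python, for instance, the built-in compile() and exec() functions do in a couple of lines what JavaStatement assembles from temporary files and javac:

```python
# A throwaway statement of the kind the article targets, e.g. checking
# how the % operator treats negative numbers.
statement = "result = -7 % 3"

code = compile(statement, "<snippet>", "exec")  # roughly javac's role
namespace = {}
exec(code, namespace)                           # roughly invoking main()
print(namespace["result"])  # 2
```

The point stands either way: being able to hand source text to the compiler at runtime is what makes this kind of scratch-pad tool possible.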
Beyond JavaStatement, perhaps challenge yourself by exploring how to use javac in your own programs. Off the top of my head, I can think of two javac uses:
- A rudimentary form of scripting: Coupled with using your own classloader, you could make the compiled classes reference objects from within a running program
- A programming language translator: It is simpler to translate a program (say written in Pascal) into Java, and compile it into classfiles, than to compile it yourself
Remember, javac does the hard work of translating a high-level language into low-level instructions—freeing you to do your own thing with Java!
Learn more about this topic
- To download this article's program, go to
- For more on the javac compiler, read the "javac—The Java Compiler" page
- "The Reflection API" trail by Dale Green in Sun Microsystems' Java Tutorial (Sun Microsystems, 2002)
- The Javadoc for java.lang.reflect
- Read "A Java Make Tool for the Java Language" (the tool uses javac as we have in this article)
- Browse the Development Tools Tips 'N Tricks
jGuru Forums
Posted By: j_s
Posted On: Tuesday, May 3, 2005 02:13 AM
Would anyone have help on mouse events for a simple generic class?
The program I have is an applet which draws out a grid of squares - say 20 by 20 squares. The squares are a certain random color. Each of the squares is an instance of this "ColorSquare" class that I have defined in a separate .java file (not embedded/inline) and I am instantiating them in "main". So these squares are NOT currently Rectangles or whatever... should they be Shapes? Areas? *shudder* Custom-Components? etc. with x, y coords and sizes.
These squares right now are just points and sizes. I'm guessing these ColorSquare objects need to be something more 'physical' for these ColorSquare objects to handle mouse events.
So, the problem is that my ColorSquare class cannot handle mouse events that are unique to each square, or each instantiation. I want EACH ColorSquare object to have its own mouse event handler and change color upon the mouse "mousing over" each square. These are not components and I do not intend to add them to any panes or containers or other complications. Do I need to do this?
I simply want to have the capability to mouse over these squares which are instantiated in Main, handle mouse events in the ColorSquare class and to handle double-buffered drawing with paint() in Main as well.
I'm currently using...
public class ColorSquare extends Applet implements MouseListener
{
    public ColorSquare(int xx, int yy, int ww, int hh, int id)
    {
        x = xx;
        y = yy;
        w = ww;
        h = hh;
        sqID = id;
        this.addMouseListener(this); //doesn't work btw
    }

    ...

    //mouse handler methods...
    public void mouseClicked (MouseEvent me)
    {
        System.out.println("click " +sqID);
    }
    public void mouseEntered (MouseEvent me)
    {
        System.out.println("entered " +sqID);
    }
    public void mousePressed (MouseEvent me) {}
    public void mouseReleased (MouseEvent me) {}
    public void mouseExited (MouseEvent me) {}
I'm extending Applet here in an attempt to utilize the addMouseListener method.
Odoo Help
Problem with printing custom report
Hello,
After trying many things, I still have a problem.
I'm developing a custom loyalty coupon module and I want to print the coupons.
I've got a popup view with a preview of a coupon that looks like this:
In this picture, the print button is in a file called "views.xml" and the code is :
In the same class, I've created a declaration for my report as this :
loyalty, here, is the name of the python file loyalty.py (the model class for my module) which contains the class loyalty.coupon.
In another file, called by the report above, there is my template for the pdf that will be print by clicking the previous button. Its name is "report.xml" and its code is :
But, it's still not working. I don't understand where the problem is, even if it's possible to read an error message when clicking on the button that is :
AttributeError: 'loyalty.coupon' object has no attribute '354'
Please, help me. The lack of information about this is a big problem for me, that is a beginner with Odoo.
Thank you so much in advance !
I forgot something : I have a class in "loyalty.py" with the render_html function for the report :
class reportCoupon(models.AbstractModel):
    _name = 'report.loyalty.report_coupon'

    @api.multi
    def render_html(self, data):
        docargs = {
            'doc_ids': self.ids,
            'doc_model': 'loyalty.coupon',
            'docs': self.env['loyalty.coupon'].browse(self.ids),
            'Date': fields.date.today(),
        }
        return self.env['report'].render('loyalty.report_coupon', values=docargs)
At /report/pdf/account.report_invoice/1, there is the pdf version of the account report.
But, when I go at /report/pdf/loyalty.report_coupon/1, there is nothing at all. just an "Internal Server Error".
And in the log, I can see "IndexError: list index out of range".
When I go at not existing url as /report/pdf/not_existing.report_test/1, it's the same error.
So maybe, I think the controller don't create the report to print at all ?
How can I solve that?
The QPixmapCache class provides an application-wide cache for pixmaps. More...
#include <QPixmapCache>
The QPixmapCache class provides an application-wide cache for pixmaps.
This class is a tool for optimized drawing with QPixmap. You can use it to store temporary pixmaps that are expensive to generate without using more storage space than cacheLimit(). Use insert() to insert pixmaps, find() to find them, and clear() to empty the cache.
QPixmapCache contains no member data, only static functions to access the global pixmap cache. It creates an internal QCache object for caching the pixmaps.
The cache associates a pixmap with a string (key). If two pixmaps are inserted into the cache using equal keys, then the last pixmap will hide the first pixmap. The QHash and QCache classes do exactly the same.
The cache becomes full when the total size of all pixmaps in the cache exceeds cacheLimit(). The initial cache limit is 1024 KB (1 MB); it is changed with setCacheLimit(). A pixmap takes roughly (width * height * depth)/8 bytes of memory.
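The rough cost formula quoted above is easy to turn into a back-of-the-envelope check. The helper below is just illustrative arithmetic, not part of the Qt API:

```python
def pixmap_kb(width, height, depth):
    """Approximate pixmap cost in kilobytes: (width * height * depth) / 8 bytes."""
    return (width * height * depth) / 8 / 1024

# A 256 x 256 pixmap at 32 bits per pixel costs about 256 KB,
# so the default 1024 KB cache limit fits only about four of them.
print(pixmap_kb(256, 256, 32))  # 256.0
```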
The Qt Quarterly article Optimizing with QPixmapCache explains how to use QPixmapCache to speed up applications by caching the results of painting.
See also QCache and QPixmap.
Returns the cache limit (in kilobytes).
The default setting is 1024 kilobytes.
See also setCacheLimit().
Removes all pixmaps from the cache.
Looks for a cached pixmap associated with the key in the cache. If the pixmap is found, the function sets pm to that pixmap and returns true; otherwise it leaves pm alone and returns false.
Example:
QPixmap pm;
if (!QPixmapCache::find("my_big_image", pm)) {
    pm.load("bigimage.png");
    QPixmapCache::insert("my_big_image", pm);
}
painter->drawPixmap(0, 0, pm);
Inserts a copy of the pixmap pm associated with the key into the cache.
All pixmaps inserted by the Qt library have a key starting with "$qt", so your own pixmap keys should never begin "$qt".
When a pixmap is inserted and the cache is about to exceed its limit, it removes pixmaps until there is enough room for the pixmap to be inserted.
The oldest pixmaps (least recently accessed in the cache) are deleted when more space is needed.
The function returns true if the object was inserted into the cache; otherwise it returns false.
See also setCacheLimit().
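The eviction rule described for insert(), where the least recently accessed entries go first, is essentially an LRU cache with a size budget. Here is a minimal Python sketch of that policy (illustrative only; the costs are arbitrary stand-ins for pixmap sizes, not Qt code):

```python
from collections import OrderedDict

class LruCache:
    def __init__(self, limit):
        self.limit = limit          # total "cost" allowed, like cacheLimit()
        self.used = 0
        self.items = OrderedDict()  # oldest (least recently used) entries first

    def insert(self, key, cost):
        # Evict least-recently-used entries until the new item fits.
        while self.items and self.used + cost > self.limit:
            _, old_cost = self.items.popitem(last=False)
            self.used -= old_cost
        if cost > self.limit:
            return False            # can never fit, like insert() returning false
        self.items[key] = cost
        self.used += cost
        return True

    def find(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as recently used
            return True
        return False

cache = LruCache(limit=1024)
cache.insert("a", 600)
cache.insert("b", 600)       # evicts "a" to make room
print(cache.find("a"))       # False
print(cache.find("b"))       # True
```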
Removes the pixmap associated with key from the cache.
Sets the cache limit to n kilobytes.
The default setting is 1024 kilobytes.
See also cacheLimit().
Overview of Python Visualization Tools
An overview and comparison of the leading data visualization packages and tools for Python, including Pandas, Seaborn, ggplot, Bokeh, pygal, and Plotly.
By Chris Moffitt.
Introduction
In the python world, there are multiple options for visualizing your data. Because of this variety, it can be really challenging to figure out which one to use when. This article contains a sample of some of the more popular ones and illustrates how to use them to create a simple bar chart. I will create examples of plotting data with:
In the examples, I will use pandas to manipulate the data and use it to drive the visualization. In most cases these tools can be used without pandas but I think the combination of pandas + visualization tools is so common, it is the best place to start.
What About Matplotlib?
Matplotlib is the grandfather of python visualization packages. It is extremely powerful but with that power comes complexity. You can typically do anything you need using matplotlib but it is not always so easy to figure out. I am not going to walk through a pure Matplotlib example because many of the tools (especially Pandas and Seaborn) are thin wrappers over matplotlib. If you would like to read more about it, I went through several examples in my simple graphing article.
My biggest gripe with Matplotlib is that it just takes too much work to get reasonable looking graphs. In playing around with some of these examples, I found it easier to get nice looking visualization without a lot of code. For one small example of the verbose nature of matplotlib, look at the faceting example on this ggplot post.
Methodology
One quick note on my methodology for this article. I am sure that as soon as people start reading this, they will point out better ways to use these tools. My goal was not to create the exact same graph in each example. I wanted to visualize the data in roughly the same way in each example with roughly the same amount of time researching the solution.
As I went through this process, the biggest challenge I had was formatting the x and y axes and making the data look reasonable given some of the large labels. It also took some time to figure out how each tool wanted the data formatted. Once I figured those parts out, the rest was relatively simple.
Another point to consider is that a bar plot is probably one of the simpler types of graphs to make. These tools allow you to do many more types of plots with data. My examples focus more on the ease of formatting than innovative visualization examples. Also, because of the labels, some of the plots take up a lot of space so I’ve taken the liberty of cutting them off – just to keep the article length manageable. Finally, I have resized images so any blurriness is an issue of scaling and not a reflection on the actual output quality.
Finally, I’m approaching this from the mindset of trying to use another tool in lieu of Excel. I think my examples are more illustrative of displaying in a report, presentation, email or on a static web page. If you are evaluating tools for real time visualization of data or sharing via some other mechanism; then some of these tools offer a lot more capability that I don’t go into.
Data Set
The previous article describes the data we will be working with. I took the scraping example one layer deeper and determined the detail spending items in each category. This data set includes 125 line items but I have chosen to focus only on showing the top 10 to keep it a little simpler. You can find the full data set here.
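For readers who want the same "sort by amount, keep the top 10" step without pandas, here is a standard-library version. The rows below are made-up placeholders rather than real budget figures; the column names (detail, amount) match the pandas code in the next section:

```python
import csv
import io

# Made-up sample rows standing in for mn-budget-detail-2014.csv,
# which has 125 line items in the real data set.
raw = """detail,amount
Sample project A,30000000
Sample project B,42500000
Sample project C,20000000
"""

rows = list(csv.DictReader(io.StringIO(raw)))
top = sorted(rows, key=lambda r: int(r["amount"]), reverse=True)[:10]
print(top[0]["detail"])  # Sample project B
```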
Pandas
I am using a pandas DataFrame as the starting point for all the various plots. Fortunately, pandas does supply a built in plotting capability for us which is a layer over matplotlib. I will use that as the baseline. First, import our modules and read in the data into a budget DataFrame. We also want to sort the data and limit it to the top 10 items.
budget = pd.read_csv("mn-budget-detail-2014.csv")
budget = budget.sort('amount',ascending=False)[:10]
We will use the same budget lines for all of our examples. Here is what the top 5 items look like:
Now, setup our display to use nicer defaults and create a bar plot:
pd.options.display.mpl_style = 'default'
budget_plot = budget.plot(kind="bar",x=budget["detail"],
title="MN Capital Budget - 2014",
legend=False)
This does all of the heavy lifting of creating the plot using the “detail” column as well as displaying the title and removing the legend. Here is the additional code needed to save the image as a png.
fig = budget_plot.get_figure()
fig.savefig("2014-mn-capital-budget.png")
Here is what it looks like (truncated to keep the article length manageable):
The basics look pretty nice. Ideally, I’d like to do some more formatting of the y-axis but that requires jumping into some matplotlib gymnastics. This is a perfectly serviceable visualization but it’s not possible to do a whole lot more customization purely through pandas.
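For what it's worth, the "matplotlib gymnastics" for the y-axis mostly amount to a tick-formatter function. The formatter itself is plain Python; the commented lines show roughly how it would be attached to the pandas plot (that wiring is an assumption on my part, not code from the article):

```python
def millions(value, pos=None):
    """Format a raw tick value as dollars in millions, e.g. 42500000 -> '$42.5M'."""
    return "${:,.1f}M".format(value / 1_000_000)

print(millions(42_500_000))  # $42.5M

# Roughly how it would be wired up to the pandas/matplotlib axes:
#   from matplotlib.ticker import FuncFormatter
#   budget_plot.yaxis.set_major_formatter(FuncFormatter(millions))
```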
Seaborn
Seaborn is a visualization library based on matplotlib. It seeks to make default data visualizations much more visually appealing. It also has the goal of making more complicated plots simpler to create. It does integrate well with pandas. My example does not allow seaborn to significantly differentiate itself. One thing I like about seaborn is the various built in styles which allows you to quickly change the color palettes to look a little nicer. Otherwise, seaborn does not do a lot for us with this simple chart. Standard imports and read in the data:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
budget = pd.read_csv("mn-budget-detail-2014.csv")
budget = budget.sort('amount',ascending=False)[:10]
One thing I found out is that I explicitly had to set the order of the items on the x-axis using x_order. This section of code sets the order, and styles the plot and bar chart colors:
sns.set_style("darkgrid")
bar_plot = sns.barplot(x=budget["detail"],y=budget["amount"],
palette="muted",
x_order=budget["detail"].tolist())
plt.xticks(rotation=90)
plt.show()
As you can see, I had to use matplotlib to rotate the x axis titles so I could actually read them. Visually, the display looks nice. Ideally, I'd like to format the ticks on the y-axis but I couldn't figure out how to do that without using plt.yticks from matplotlib.
ggplot
ggplot is similar to Seaborn in that it builds on top of matplotlib and aims to improve the visual appeal of matplotlib visualizations in a simple way. It diverges from seaborn in that it is a port of ggplot2 for R. Given this goal, some of the API is non-pythonic but it is a very powerful. I have not used ggplot in R so there was a bit of a learning curve. However, I can start to see the appeal of ggplot. The library is being actively developed and I hope it continues to grow and mature because I think it could be a really powerful option. I did have a few times in my learning where I struggled to figure out how to do something. After looking at the code and doing a little googling, I was able to figure most of it out. Go ahead and import and read our data:
import pandas as pd
from ggplot import *
budget = pd.read_csv("mn-budget-detail-2014.csv")
budget = budget.sort('amount',ascending=False)[:10]
Now we construct our plot by chaining together a several ggplot commands:
p = ggplot(budget, aes(x="detail",y="amount")) + \
geom_bar(stat="bar", labels=budget["detail"].tolist()) +\
ggtitle("MN Capital Budget - 2014") + \
xlab("Spending Detail") + \
ylab("Amount") + scale_y_continuous(labels='millions') + \
theme(axis_text_x=element_text(angle=90))
print p
This seems a little strange – especially using print p to display the graph. However, I found it relatively straightforward to figure out. It did take some digging to figure out how to rotate the text 90 degrees as well as figure out how to order the labels on the x-axis. The coolest feature I found was scale_y_continuous, which makes the labels come through a lot nicer. If you want to save the image, it's easy with ggsave:
ggsave(p, "mn-budget-capital-ggplot.png")
Here is the final image. I know it’s a lot of grey scale. I could color it but did not take the time to do so.
11-07-2012 11:22 AM
11-07-2012 11:24 AM - edited 11-07-2012 11:28 AM
Hi,
As I said, I am able to play streaming video if there is no special/space character in the URL.
For special/space characters I used the code below:
QUrl url = QUrl::fromEncoded("");
cardrequest.setUri(url);
But it's not working :-( I get the same Error 13.
Though the error comes while trying to play the file in the default media player, I can play a huge file if there is no space in the URL; also, if I put the same file in a location where there is no space in the URL, then it plays perfectly.
11-07-2012 05:08 PM
error 13 = OutOfMemoryError
how big is that file?
11-08-2012 03:54 AM
Yes, the MediaError list says that it's OutOfMemoryError, but when I put the video file in another remote location whose link doesn't have any space (%20), it works perfectly.
11-08-2012 06:06 AM
The media player not being able to play remote URLs with spaces is actually a bug. A possible workaround for this is to put the link in a playlist .m3u file on your device and use that instead. Works for both videos and music, but it would be nice if they fixed this anyways.
11-16-2012 06:16 AM
Hello,
Yes, the Qt MediaPlayer API currently has a few big bugs:
1) URLs with spaces don't play properly
2) .m3u URLs may not work properly
I have already escalated both of these issues internally - trying to get the solution in for the next SDK update(s).
Also, the simulator, Unfortunately, has very limited supported on multimedia playback at the moment. This is due to the fact that the simulator doesn't have enough decoders (proprietory reasons). Feel free to open up a feature request through JIRA :
11-16-2012 06:33 AM
@somnathbanik (and everyone else in thread)
Just like the Qt MediaPlayer API, the Media previewer also has some bugs right now with respect to URLs with spaces. And your URL seems to have spaces (encoded).
Your code:
QUrl url = QUrl::fromEncoded("");
While we try to get all these fixes in (in the next SDK releases - I have already escalated the following issues), these are my own testing results.
I am sharing it here so that everyone knows what works and what doesnt;
These are the known results for the media previewer (paths with spaces):
"/accounts/1000/shared/music/My Track.mp3" <-- works
" Track.mp3" <-- works
"" <-- works
"
For the Qt MediaPlayer API:
// local file with spaces
sourceUrl: "/accounts/1000/shared/music/name with spaces.mp3" <-- Works
// local file with %20, instead of spaces
sourceUrl: "/accounts/1000/shared/music/name%20with%20spaces.
// URL with spaces
sourceUrl: from Duet in C Major.mp3 <-- Doesn’t Work in both the MediaPlayer API and mm-renderer API (if I directly use the C mm-renderer API in a native app)
// URL with %20 instead of spaces
sourceUrl:
.(works in a native app with mm-renderer API C API but DOESN’T work with Qt MediaPlayer API)
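The space-versus-%20 distinction running through these results is plain URL percent-encoding. As an illustration of the encoding rule itself (not BlackBerry code), Python's urllib shows the conversion both ways:

```python
from urllib.parse import quote, unquote

path = "/accounts/1000/shared/music/name with spaces.mp3"

encoded = quote(path)  # slashes are kept by default; each space becomes %20
print(encoded)         # /accounts/1000/shared/music/name%20with%20spaces.mp3
print(unquote(encoded) == path)  # True
```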
12-06-2012 01:42 PM
Will i be able to test this in the sim or only on the dev
01-11-2013 12:21 AM
Hi,
I am trying to
import bb.multimedia 1.0 but ndk is not giving me intallegence in the qml view. due to which i am unable to acess the properties of MediaPlayer
02-17-2014 08:58 PM
I know this post is like 2 years old but if you still have the code, could you please post the code that worked successfully with playing audio from an http source?
#include <nanomsg/nn.h>
#include <nanomsg/pair.h>
Pair protocol is the simplest and least scalable scalability protocol. It allows scaling by breaking the application in exactly two pieces. For example, if a monolithic application handles both accounting and agenda of HR department, it can be split into two applications (accounting vs. HR) that are run on two separate servers. These applications can then communicate via PAIR sockets.
The downside of this protocol is that its scaling properties are very limited. Splitting the application into two pieces allows to scale the two servers. To add the third server to the cluster, the application has to be split once more, say by separating HR functionality into hiring module and salary computation module. Whenever possible, try to use one of the more scalable protocols instead.
NN_PAIR
No protocol-specific socket options are defined at the moment.
nn_bus(7) nn_pubsub(7) nn_reqrep(7) nn_pipeline(7) nn_survey(7) nanomsg(7)
an integer.
strupr() is not in the ANSI spec or the POSIX spec, so its use is considered "non-portable". It is quite likely that it does not exist on your system or development environment, or it does exist, but you are compiling with -strict or -ansi.
Parsing arg 1 of getenv makes pointer from integer without a cast.
#include <sdgstd.h>
will solve my problem.
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    switch(argc)
    {
        case 2:
            if(getenv(toupper(argv[1])))
            {
                printf("%s = %s\n", toupper(argv[1]), getenv(toupper(argv[1])));
            }
            else
            {
                printf("%s not found\n", toupper(argv[1]));
            }
            break;
    }
    return 0;
}
I'm getting this error:
warning: passing arg 1 of 'toupper' makes integer from pointer without a cast.
warning: passing arg 1 of getenv makes pointer from integer without a cast
Regards
Eugene
cc environ.c -o environ
Gary
> #include <sdgstd.h>
> will solve my problem.
strupr() is usually prototyped in sdgstd.h, so that should solve your problem.
how do I copy the argv[1] value into a char buffer[100] variable.
From this buffer variable, I loop thru it, convert each character
into a upper case character using toupper(), then store it into
another variable and use it in my program.
strcpy(task_name, argv[1]);
for(i=0; i<strlen(task_name); i++)
    task_name[i] = toupper(task_name[i]);
Gary
RomMod
Community Support Moderator
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial | https://www.experts-exchange.com/questions/20777904/getenv.html | CC-MAIN-2018-30 | refinedweb | 313 | 66.74 |
Since C++ has no built-in syntax for returning multiple values from functions and methods, programmers have been using a number of techniques to simulate this when needed, and the number has grown since the introduction of C++11. In this post I want to provide an overview of some of the options we have today for returning multiple values from functions, and possible future directions in the language.
Introduction - why multiple return values?
Multiple return values from functions are not a new concept in programming - some old and venerable languages like Common Lisp have had them since the early 1980s.
There are many scenarios where multiple return values are useful:
First and foremost, for functions that naturally have more than one value to compute. For example, the Common Lisp floor function computes the quotient and the remainder of its two operands, and returns both. Another example is std::minmax in C++11, that finds the minimal and the maximal value in a container simultaneously.
Second, multiple return values are helpful when the data structure the function operates on contains multiple values per entry. For example, Python 3's dict.items is an iterator over key / value pairs, and each iteration returns both, which is frequently useful. Similarly, in C++ the mapping family of containers provides iterators that hold key / value pairs, and methods like std::map::find logically return a pair, even though it's encapsulated in an iterator object. Another related, but slightly different example is Python's enumerate, which takes any sequence or iterator and returns index / value pairs - very useful for writing some kinds of for loops.
Third, the multiple return values may signal different "paths" - like error conditions or "not found" flags, in addition to actual values. In Go, map lookup returns a value / found pair, where "found" is a boolean flag saying whether the key was found in the map. In general, in Go it's idiomatic to return a value / error pair from functions. This method is useful in C++ as well, and I'll cover an example in the next section.
Multiple return values are so convenient that programmers usually find ways to simulate them even in languages that don't support them directly. As for new programming languages, most of them come with this feature natively supported. Go, Swift, Clojure, Rust and Scala all support multiple return values.
Multiple return values in C++ with output parameters
Back to C++, let's start our quest with the oldest and possibly still most common method - using some of the function's parameters as "out" parameters. This method is made possible by C++ (based on C before it) making a strict distinction between parameters passed by value and by reference (or pointer) into functions. Parameters passed by pointers can be used to "return" values to the caller.
This technique has old roots in C, where it's used in many places in the standard library; for example fgets and fscanf. Many POSIX functions adopt the conventions of returning an integer "error code" (0 for success), while writing any output they have into an output parameter. Examples abound - gettimeofday, pthread_create... there are hundreds (or thousands). This has become such a common convention that some code-bases adopt a special marker for output parameters, either with a comment or a dummy macro. This is to distinguish by-pointer input parameters from output parameters in the function signature, thus signaling to the user which is which:
#define OUT int myfunc(int input1, int* input2, OUT int* out) { ... }
C++ employs this technique in the standard library as well. A good example is the std::getline function. Here's how we read everything from stdin and echo every line back with a prefix:
#include <iostream> #include <string> int main(int argc, const char** argv) { std::string line; while (std::getline(std::cin, line)) { std::cout << "echo: " << line << "\n"; } return 0; }
std::getline writes the line it has read into its second parameter. It returns the stream (the first parameter), since a C++ stream has interesting behavior in boolean context. It's true so long as everything is OK, but flips to false once an error occurs, or an end-of-file condition is reached. The latter is what the sample above uses to concisely invoke std::getline in the condition of a while loop.
C++'s introduction of reference types adds a choice over the C approach. Do we use pointers or references for output parameters? On one hand references result in simpler syntax (if the line would have to be passed by pointer in the code above, we'd have to use &line in the call) and also cannot be nullptr, which is important for output parameters. On the other hand, with references it is very hard to look at a call and discern which parameters are input and which are output. Also, the nullptr argument works both ways - occasionally it is useful to convey to the callee that some output is not needed and a nullptr in an output parameter is a common way to do this.
As a result, some coding guidelines recommend only using pointers for output parameters, while using const references for input parameters. But as with all issues of style, YMMV.
Whichever style you pick, this approach has obvious downsides:
- The output values are not uniform - some are returned, some are parameters, and it's not easy to know which parameters are for output. std::getline is simple enough, but when your function takes 4 and returns 3 values, things start getting hairy.
- Calls require declarations of output parameters beforehead (such as line in the example above). This bloats the code.
- Worse, the separation of parameter declaration from its assignment within the function call can result in uninitialized variables in some cases. To analyze whether line is initialized in the example above, one has to carefully understand the semantics of std::getline.
On the other hand, prior to the introduction of move semantics in C++11, this style had serious performance advantages over the alternatives, since it can avoid extra copying. I'll discuss this a bit more later on in the article.
Pairs and tuples
The std::pair type is a veteran in C++. It's used in a bunch of places in the standard library to do things like hold keys and values of mappings, or to hold "status, result" pairs. Here's an example that demonstrates both:
#include <iostream> #include <unordered_map> using map_int_to_string = std::unordered_map<int, std::string>; void try_insert(map_int_to_string& m, int i, const std::string& s) { std::pair<map_int_to_string::iterator, bool> p = m.insert({i, s}); if (p.second) { std::cout << "insertion succeeded. "; } else { std::cout << "insertion failed. "; } std::cout << "key=" << p.first->first << " value=" << p.first->second << "\n"; } int main(int argc, const char** argv) { std::unordered_map<int, std::string> mymap; mymap[1] = "one"; try_insert(mymap, 2, "two"); try_insert(mymap, 1, "one"); return 0; }
The std::unordered_map::insert method returns two values: an element iterator and a boolen flag saying whether the requested pair was inserted or not (it won't be inserted if the key already exists in the map). What makes the example really interesting is that there are nested multiple values being returned here. insert returns a std::pair. But the first element of the pair, the iterator, is just a thin wrapper over another pair - the key/value pair - hence the first->first and first->second accesses we use when printing the values out.
Thus we also have an example of a shortcoming of std::pair - the obscureness of first and second, which requires us to always remember the relative positions of values within the pairs. p.first->second gets the job done but it's not exactly a paragon of readable code.
With C++11, we have an alternative - std::tie:
void try_insert_with_tie(map_int_to_string& m, int i, const std::string& s) { map_int_to_string::iterator iter; bool did_insert; std::tie(iter, did_insert) = m.insert({i, s}); if (did_insert) { std::cout << "insertion succeeded. "; } else { std::cout << "insertion failed. "; } std::cout << "key=" << iter->first << " value=" << iter->second << "\n"; }
Now we can give the pair members readable names. The disadvantage of this approach is, of course, that we need the separate declarations that take extra space. Also, while in the original example we could use auto to infer the type of the pair (useful for really hairy iterators), here we have to declare them fully.
Pairs work for two return values, but sometimes we need more. C++11's introduction of variadic templates finally made it possible to add a generic tuple type into the standard library. A std::tuple is a generalization of a std::pair for multiple values. Here's an example:
std::tuple<int, std::string, float> create_a_tuple() { return std::make_tuple(20, std::string("baz"), 1.2f); } int main(int argc, const char** argv) { auto data = create_a_tuple(); std::cout << "the int: " << std::get<0>(data) << "\n" << "the string: " << std::get<1>(data) << "\n" << "the float: " << std::get<2>(data) << "\n"; return 0; }
The std::get template is used to access tuple members. Again, this is not the friendliest syntax but we can alleviate it somewhat with std::tie:
int i; std::string s; float f; std::tie(i, s, f) = create_a_tuple(); std::cout << "the int: " << i << "\n" << "the string: " << s << "\n" << "the float: " << f << "\n";
Another alternative is to use even more template metaprogramming magic to create a "named" tuple (similar to the Python namedtuple type). Here's an example. There are no standard solutions for this, though.
Structs
When faced with sophisticated "named tuple" implementations, old-timers snort and remind us that in the olden days of C, this problem already had a perfectly valid solution - a struct. Here's the last example rewritten using a struct:
struct RetVal { int inumber; std::string str; float fnumber; }; RetVal create_a_struct() { return {20, std::string("baz"), 1.2f}; } // ... usage { // ... auto retvaldata = create_a_struct(); std::cout << "the int: " << retvaldata.inumber << "\n" << "the string: " << retvaldata.str << "\n" << "the float: " << retvaldata.fnumber << "\n"; }
When the returned value is created, the syntax is nice an concise. We could even omit some of the fields if their default values are good enough (or the struct has constructors for partial field initialization). Also note how natural the access to the returned value's fields is: all fields have descriptive names - this is perfect! C99 went a step further here, allowing named initialization syntax for struct fields:
RetVal create_a_struct_named() { return {.inumber = 20, .str = std::string("baz"), .fnumber = 1.2f}; }
This is very useful for self-documenting code that doesn't force you to go peek into the RetVal type every time you want to decode a value. Unfortunately, even if your C++ compiler supports this, it's not standard C++, because C++ did not adopt the feature. Apparently there was an active proposal to add it, but it wasn't accepted; at least not yet.
The rationale of the C++ committee, AFAIU, is to prefer constructors to initialize struct fields. Still, since C++ functions don't have a named parameter ("keyword argument" in Python parlance) syntax, using ctors here wouldn't be more readable. What it would allow, though, is convenient non-zero default initialization values.
For example:
struct RetValInitialized { int inumber = 17; std::string str = "foobar"; float fnumber = 2.24f; }; RetValInitialized create_an_initialized_struct() { return {}; }
Or even fancier initialization patterns with a constructor:
struct RetValWithCtor { RetValWithCtor(int i) : inumber(i), str(i, 'x'), fnumber(i) {} int inumber; std::string str; float fnumber; }; RetValWithCtor create_a_constructed_struct() { return {10}; }
This would also be a good place to briefly address the performance issue I mentioned earlier. In C++11, it's almost certain that structs returned by value will not actually copied due to the return-value optimization mechanism. Neither will the std::string held by value within the struct be copied. For even more details, see section 12.8 of the C++11 standard, in the paragraph starting with: mechanism is called copy elision by the standard.
Structured bindings: a new hope for C++17
Luckily, the C++ standard committee consists of brilliant folks who have already recognized that even though C++ has many ways to do multiple return values, none is really perfect. So there's a new proposal making the rounds now for the C++17 edition of the language, called Structured bindings.
In brief, the idea is to support a new syntax that will make tying results of tuple-returning functions easier. Recall from the discussion above that while tuples have a fairly convenient syntax returning them from functions, the situation on the receiving side is less than optimal with a choice between clunky std::get calls or pre-declaration and std::tie.
What the proposal puts forward is the following syntax for receiving the tuple returned by create_a_tuple:
auto {i, s, f} = create_a_tuple(); // Note: proposed C++17 code, doesn't compile yet
The types of i, s and f are "auto"-inferred by the compiler from the return type of create_a_tuple. Moreover, a different enhancement of C++17 is to permit a shorter tuple creation syntax as well, removing the need for std::make_tuple and making it as concise as struct creation:
std::tuple<int, std::string, float> create_a_tuple() { return {20, std::string("baz"), 1.2f}; } // Note: proposed C++17 code, doesn't compile yet
The structured bindings proposal is for returned struct values as well, not just tuples, so we'll be able to do this:
auto {i, s, f} = create_a_struct();
I sure hope this proposal will get accepted. It will make simple code pleasant to write and read, at no cost to the compiler and runtime.
Conclusion
So many possibilities, what to choose? Personally, since I believe code readability is more important than making it quick to compose, I like the explicit approach of wrapping multiple values in structs. When the returned values logically belong together, this is a great way to collect them in a natural self-documenting way. So this would be the approach I'd use most often.
That said, sometimes the two values returned really don't belong together in any logical sense - such as a stream and a string in the getline example. Littering the source code with one-off struct types named StreamAndResult or OutputAndStatus is far from ideal, so in these cases I'd actually consider a std::pair or a std::tuple.
It goes without saying that the proposed structured bindings in C++17 can make all of this even easier to write, making folks less averse to the current verboseness of tuples. | https://eli.thegreenplace.net/2016/returning-multiple-values-from-functions-in-c/ | CC-MAIN-2018-39 | refinedweb | 2,429 | 51.38 |
?
This is something I would like to start thinking about, ie. how to
implement XUpdate in such a way it is controlled from the sitemap.
(Basically because this is something I am possibly capable of contributing
,)
There seem to be several options:
1. XUpdateTransformer
2. XSLT Stylesheet which knows how to do all XUpdate transformations
3. XSLT StyleSheet that transforms an XUpdate modification specification
into into a new XSLT StyleSheet that is then used to process the data
4. XSP TagLib
5. XUpdateAction
Have I missed any?
What information needs to be available?
1. The new data (Request)
2. The XUpdate modification specification (from a pseudo protocol url)
3. The xml fragment to be modified (from a pseudo protocol url)
How might each option work?
(I am assuming for the moment that we will have some kind of SiteMap
component, that I am calling the "(?) Transformer", that does the job of
serialising a node representing the modified xml fragment to the datasource)
1. XUpdateTransformer
Use aggregation or XInclude to combine into one "document", the new data,
xupdate spec and xml fragment to be modified.
<doc>
<request>
<!-- the output of the Request or Stream Generator -->
</request>
<update>
<!-- the XUpdate modification specification -->
</update>
<source>
<!-- the xml fragment to be modified -->
</source>
</doc>
Pipe this to the XUpdateTransformer, telling it the name of the nodes holding
"request", "update" and "source" (unless we developed a new namespace for
this). Because the XUpdate transformer would need to resolve XPaths (for
copying data) in both the "request" and "source" Nodes.
Pipe this (now with the modified "source") to the (?) Transformer with the
name of the "source" node and a URL to have it serialised to Store.
2. XSLT Stylesheet which knows how to do all XUpdate transformations
Same as 1. above I believe
3. XSLT StyleSheet that transforms an XUpdate modification into XSLT
that is then used to process the data
Aggregate the "request" and the "source" as above.
Transform this with an XSLT generated by calling a separate pipeline via
the "cocoon:/" protocol that uses a standard XSLT to transform the XUpdate
modification specification into an XSLT for that job (job, not request, so
it can be cached, job represents the source-dtd/operation axis)
Transform this with the magic (?) Transformer as before.
Note: Is this XSLT generation feasible?
4. XSP TagLib
The site author writes one XSP page for each source-dtd/operation axis in
their project, embedding the same set of XUpdate modification
specifications as above but which are now treated as a TagLib.
The "source" is fed to the TagLib using the XInclude TagLib via a
pseudo-protocol url, the request data is retrieved with inline XSP Request
TagLib tags (inside the XUpdate code).
The output of the TagLib is the modified "source" which is passed down the
pipe to the (?) Transformer (etc).
5. XUpdateAction
Use whatever Generator you need to produce the response to the user's edit
The XUpdate Action is passed a reference to the "source" and "update" from
the SiteMap. It reads them both in, performs the modification (accessing
parameters in the Environment) and writes out the modified "source"
What other approaches might there be? | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200111.mbox/%3Ca05101010b82d2dd7b365@%5B10.0.1.40%5D%3E | CC-MAIN-2017-09 | refinedweb | 527 | 62.88 |
.
Commodity CEFs
- source: Morningstar Analysts
There are only five CEFs that invest in commodities and, at year-end, had been around for at least three months. Nuveen Long/Short Commodity Total Return CTF was launched only in late October and Sprott Physical Platinum & Palladium SPPP launched in mid-December.
This table is a great set piece for the dangers of investing in high-premium funds. Notice that Sprott Physical Silver PSLV posted the highest NAV total return (9.7%) and the worst share price total return (negative 10.3%) of the segment. Over the course of 2012, the fund saw its premium dissipate to 1.5% from 24.2%, even as the underlying commodity performed decently--as reflected by the NAV total return. We had written extensively about this fund's premium, most recently in an article back in July when we documented the premium's demise. A similar compression of the premium happened to PSLV's predecessor: Sprott Physical Gold PHYS in 2011. However, investors seemed to have wised up to Sprott's marketing campaign, as SPPP is trading at only a 3.9% premium after nearly a month's worth of trading.
It should be noted that the Nuveen funds (CTF and its older cousin Nuveen Diversified Commodity CFD) are commodity funds but their portfolios differ from the others in at least one important way: They are broad basket commodity funds, whereas the others are precious-metals funds. Essentially, the other five funds in this group invest in gold or silver bullion and, in the cases of Central Fund of Canada CEF and Central GoldTrust GTU, a very small percentage of assets are in gold or silver certificates. The Nuveen funds, by contrast, invest via futures contracts in a broad array of commodities, giving investors exposure to far more than precious-metals prices.
Overall, then, it was a very good year for investors in most allocation-oriented CEFs and a decent year for commodity-focused CEFs. | http://www.morningstar.com/advisor/t/69771008/year-in-review-allocation-and-commodity-cefs.htm?single=true | CC-MAIN-2015-11 | refinedweb | 328 | 53 |
0
It's a well known saying that the quality of output depends on the quality of input. A machine learning model is no different. The accuracy of your machine learning model prediction depends on the quality of data and the amount of data that is fed to it. Though we cannot control the amount of data, we certainly can control the quality of it. It's not uncommon in typical machine learning projects for teams to spend 50%-60% of their time preparing data.
In this article, you will learn different techniques for using some of the standard Python libraries, such as pandas and numpy, to convert raw data to quality data.
Data preparation in a machine learning project can be broadly subdivided into two major categories.
1. Data wrangling. Also known as data munging, this is the process of normalizing data, identifying missing data and performing cleanup either to remove the missing data or transform existing data using basic statistical operations like mean or median to impute missing values.
2. Feature engineering. In machine learning terminology, a column of data is often called a "feature." In most cases, we may need to combine two or more features to create a single feature or to derive a new feature-based on the existing feature set. For example, if the data has "employee hire date" as one of the features, we can derive the employee's age with the company by subtracting the current date and his hire date.
In this article, we are going to use a hypothetical store that has branches all over the U.S. The sample data includes columns for total inventory items and the number of items that have been sold by a specific branch on a specific date. This can be viewed as a small sample of a huge data set.
Lets import pandas and print the data set.
1 2 3
import pandas as pd df = pd.read_csv('Downloads/inventory.csv') print(df)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 5 9/14/2019 NaN 238 300 44 4565 ma 100 18 9/10/2019 NaN 998 900 33 2233 fl 100
Methods for dealing with missing data include deleting, imputing, or predicting.
1. Delete. One of the easier ways to address null values in your data is just to drop them. This is a preferred approach if the null values are relatively smaller in number. If not we might be losing some meaningful information by dropping them. Pandas have dropna() function that can be used in dropping all null values. In our sample data, there are two rows with null values.
1
df.dropna()
Once we execute the above statements, you can see the resulting data with null rows being removed.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
2. Impute. There are cases where we cannot afford to drop existing data, especially if the sample size of the data is relatively small or if the ratio of null values is relatively high. In these cases, we need to impute missing data, and different strategies are available for that. Some of the commonly used approaches for continuous data are the mean/average, median, or mode value of the features. For categorical data, the mode is always the preferred approach. Pandas have fillna() method to accomplish this.
3. Predict. This is the scenario where we cannot afford to guess the wrong value. Instead of imputing random values, we predict the values using machine learning algorithms. We use a regression model to predict continuous data and a classification model for categorical data.
While preparing the data, we must look out for extreme values. Some of these values may be genuine cases, but some could be erroneous. The presence of outliers would significantly affect the modeling process and hence the prediction accuracy.
For example, in the data above you can see that observation (row) 17 has an extreme price. Looking at other data, it seems there is a high possibility that this could be a user error.
The z-score of observation is a common way to detect outliers. To calculate z-score of the 'Amount' feature, we will use the following formula.
1 2 3
zscore = (Amount - Mean Amount) ---------------------------- Standard Deviation of Amount
We can set a specific threshold for standard deviation (> 2.0 or 2.5), and once the z-score exceeds this value, we can safely reject values as outliers. Let's compute the zscore of the 'Amount' feature and plot a graph.
1 2
df['Amount_zscore'] = (df['Amount'] - df['Amount'].mean())/df['Amount'].std() print(df['Amount_zscore'])
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
0 0.508907 1 0.930321 2 -0.492328 3 -0.591836 4 -0.591643 6 -0.540874 7 0.579064 8 -0.595888 9 -0.595547 10 -0.591946 11 -0.580130 12 -0.595468 13 -0.591794 14 1.730032 15 -0.523167 16 -0.262655 17 2.804950
1
df['Amount_zscore'].plot.kde()
This is one technique used to detect and eliminate outliers. There are other techniques, like Dbscan and Isolation Forest, that can be used based on particular data. The explanation of these is beyond the scope of this article.
When you have data with multiple features and each has a different unit of measurement, there is a high possibility of skewed data. Because of this, it's important that we convert all possible features to the same standard scale. This technique is called normalizing, or feature scaling.
For example, in our data, inventory quantities range from 1 to 1000, whereas cost ranges from 1 to 10000.
Min-Max scaling is a commonly used technique in normalizing the data. The formula is:
1 2 3
Feature(Normalized) = (Feature Value - Min. Feature Value) -------------------------------------------- (Max. Feature Value - Min. Feature Value)
It's important to apply the outlier technique mentioned above before normalizing the data. Otherwise, you will run the risk of skewing the normal values in your data to a small interval.
Data is not always numerical. You may have, for example, text data that is categorical. Referring to our dataset, though most of the features are numerical, "branch loc" refers to the state in which a specific branch is located, and it contains text data. As part of preparing the data, it's important to convert this feature to numerical data. There are many ways you can do this. We are going to use a "Label Encoding" technique.
1 2 3
df["branch loc"] = df["branch loc"].astype('category') df["branch_loc_cat"] = df["branch loc"].cat.codes print (df[["branch loc", "branch_loc_cat"]])
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
branch loc branch_loc_cat 0 tx 7 1 tx 7 2 il 1 3 ma 2 4 ma 2 6 tx 7 7 il 1 8 il 1 9 ca 0 10 ca 0 11 ca 0 12 ri 5 13 ny 4 14 ny 4 15 tx 7 16 sc 6 17 nc 3
You can see that for each state, a numerical value has been assigned. We first converted this feature to a categorical type before applying categorical codes to it.
Feature engineering requires expertise both in domain knowledge and technical knowledge. Having too few features in the data may result in a poorly performing model, and too many features may result in an overly complex model. When there are too many features in the training data, it's highly possible that the model may be over-fitting the data — that is, performing accurately on training data but poorly on new, untrained test data. It's important to select an optimal number of features that help us design a better performing model. Selecting an optimal set of features is a two-step process :
Having too many features in the data will increase the complexity of the model prediction, training time, and computation cost, and it may decrease the accuracy of the prediction because of too many variables. So it is advisable to reduce the number of features that can yield optimal accuracy in the prediction process. One technique for doing this is dimensionality reduction. It is achieved in two different ways:
There are many other techniques used to accomplish this. Some of the most common are:
Principal Component Analysis (PCA)
Random Forest
Low Variance Filter
High Correlation Filter
For business problems like credit card fraud detection, spam filtering, or medical diagnosis, the actual data could be less than 1% of the actual sample size. In cases like this, we need to be careful not to reject accurate data as noise or outliers. If we use accuracy as a performance metric for this model, it's obvious that the model will predict every credit card transaction with 99% accuracy. But businesses are more concerned about the 1% of false credit card transactions. So, accuracy is not the best performance metric in this case, and we may need to rely on other performance metrics like precision, recall, and sensitivity. All these metrics can be derived once we have tabulated the confusion matrix.
Some of the commonly used techniques to improve prediction score include:
Though this guide addresses the important aspects of preparing the data, it is just the tip of the iceberg. Multiple other approaches and techniques are available depending on the type of data. The Python ecosystem offers multiple libraries that are well equipped to address this optimally. This is also an iterative process, and you may need to go through some of these steps more than once to fine-tune your data to prepare a better-predicting model.
0
Test your skills. Learn something new. Get help. Repeat.Start a FREE 10-day trial | https://www.pluralsight.com/guides/preparing-data-machine-learning | CC-MAIN-2019-47 | refinedweb | 1,651 | 64.81 |
Estimate a scalar ARMA process
The objective here is to estimate an ARMA model from a scalar stationary time series using the Whittle estimator and a centered normal white noise.
The data can be a unique time series or several time series collected in a process sample.
If the user specifies the order (p, q), OpenTURNS fits an ARMA(p, q) model to the data by estimating the AR coefficients (a_1, ..., a_p), the MA coefficients (b_1, ..., b_q) and the variance sigma^2 of the white noise.
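Concretely, the model is the recursion X_t + a_1 X_{t-1} + ... + a_p X_{t-p} = E_t + b_1 E_{t-1} + ... + b_q E_{t-q} (the sign convention OpenTURNS uses when printing an ARMA later in this example). A minimal pure-Python sketch of that recursion, with hypothetical coefficients, makes the roles of the a_k, b_k and sigma explicit:

```python
import random

def simulate_arma(ar, ma, sigma, n, burn=200):
    """Simulate X_t + sum_k a_k X_{t-k} = E_t + sum_k b_k E_{t-k},
    with E_t a centered normal white noise of standard deviation sigma."""
    p, q = len(ar), len(ma)
    past_x = [0.0] * p   # X_{t-1}, ..., X_{t-p}
    past_e = [0.0] * q   # E_{t-1}, ..., E_{t-q}
    out = []
    for t in range(n + burn):
        eps = random.gauss(0.0, sigma)
        # Move the AR terms to the right-hand side to solve for X_t
        x_t = (eps
               + sum(b * e for b, e in zip(ma, past_e))
               - sum(a * x for a, x in zip(ar, past_x)))
        past_x = ([x_t] + past_x)[:p]
        past_e = ([eps] + past_e)[:q]
        if t >= burn:              # discard the transient
            out.append(x_t)
    return out

# Hypothetical ARMA(2, 1): X_t - 0.5 X_{t-1} + 0.2 X_{t-2} = E_t + 0.3 E_{t-1}
random.seed(0)
path = simulate_arma([-0.5, 0.2], [0.3], sigma=1.0, n=1000)
print(len(path))  # 1000
```

This is only an illustration of the model equation; the estimation itself is done by WhittleFactory below.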
If the user specifies a range of orders (p, q) in Ind_p x Ind_q, where Ind_p = [p1, p2] and Ind_q = [q1, q2], OpenTURNS finds the best model ARMA(p, q) that fits the data and estimates the corresponding coefficients.
We proceed as follows:
- the object WhittleFactory is created with either a specified order (p, q) or a range Ind_p x Ind_q. By default, the Welch estimator (object Welch) is used with its default parameters.
- for each order (p, q), the estimation of the parameters is done by maximizing the reduced equation of the Whittle likelihood function ([lik2]), thanks to the method build of the object WhittleFactory. This method applies to a time series or a process sample. If the user wants to get the quantified criteria AICc, AIC and BIC of the model ARMA(p, q), this is specified by giving a Point of size 0 (Point()) as input parameter of the method build.
- the output of the estimation is, in all cases, one unique ARMA: the ARMA with the specified order, or the optimal one with respect to the AICc criterion.
- in the case of a range Ind_p x Ind_q, the user can get all the estimated models thanks to the method getHistory of the object WhittleFactory. If the build has been parameterized by a Point of size 0, the user also has access to all the quantified criteria.
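The three quantified criteria are the usual information criteria. As a sketch (the log-likelihood used internally is the Whittle spectral approximation, and the exact scaling OpenTURNS applies is not reproduced here), their generic definitions are:

```python
import math

def information_criteria(log_likelihood, n_params, n_obs):
    """Generic AICc / AIC / BIC definitions, returned in the same order
    as the criterion Point of WhittleFactory (AICc, AIC, BIC)."""
    aic = 2.0 * n_params - 2.0 * log_likelihood
    aicc = aic + 2.0 * n_params * (n_params + 1.0) / (n_obs - n_params - 1.0)
    bic = n_params * math.log(n_obs) - 2.0 * log_likelihood
    return aicc, aic, bic

# For an ARMA(p, q) with estimated noise variance, n_params = p + q + 1
aicc, aic, bic = information_criteria(log_likelihood=-380.0, n_params=7, n_obs=1000)
print(aic < aicc)  # True: AICc adds a small-sample penalty on top of AIC
```

Lower values are better for all three, and AICc is the criterion used to rank the candidate orders.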
The synthetic data is generated using the following 1-d ARMA(4, 2) process:

X_{0,t} + 0.4 X_{0,t-1} + 0.3 X_{0,t-2} + 0.2 X_{0,t-3} + 0.1 X_{0,t-4} = E_{0,t} + 0.4 E_{0,t-1} + 0.3 E_{0,t-2}

with the noise E_t defined as:

E_t ~ Triangular(-1, 0, 1)
from __future__ import print_function
import openturns as ot
import matplotlib.pyplot as plt
ot.RandomGenerator.SetSeed(0)
[6]:
# Create an arma process
tMin = 0.0
n = 1000
timeStep = 0.1
myTimeGrid = ot.RegularGrid(tMin, timeStep, n)

# ARMA(4,2) coefficients and white noise of the simulated process
myARCoef = ot.ARMACoefficients([0.4, 0.3, 0.2, 0.1])
myMACoef = ot.ARMACoefficients([0.4, 0.3])
myWhiteNoise = ot.WhiteNoise(ot.Triangular(-1.0, 0.0, 1.0), myTimeGrid)

arma = ot.ARMA(myARCoef, myMACoef, myWhiteNoise)
tseries = ot.TimeSeries(arma.getRealization())

# Create a sample of N time series from the process
N = 100
sample = arma.getSample(N)
[7]:
# CASE 1 : we specify a (p,q) order # Specify the order (p,q) p = 4 q = 2 # Create the estimator factory = ot.WhittleFactory(p, q) print("Default spectral model factory = ", factory.getSpectralModelFactory()) # To set the spectral model factory # For example, set WelchFactory as SpectralModelFactory # with the Hanning filtering window # The Welch estimator splits the time series in four blocs without overlap myFilteringWindow = ot.Hanning() mySpectralFactory = ot.WelchFactory(myFilteringWindow, 4, 0) factory.setSpectralModelFactory(mySpectralFactory) print("New spectral model factory = ", factory.getSpectralModelFactory()) # Estimate the ARMA model from a time series # To get the quantified AICc, AIC and BIC criteria arma42, criterion = factory.buildWithCriteria(tseries) AICc, AIC, BIC = criterion[0:3] print('AICc=', AICc, 'AIC=', AIC, 'BIC=', BIC) arma42
Default spectral model factory = class=WelchFactory window = class=FilteringWindows implementation=class=Hamming blockNumber = 1 overlap = 0 New spectral model factory = class=WelchFactory window = class=FilteringWindows implementation=class=Hanning blockNumber = 4 overlap = 0 AICc= 771.8917262722518 AIC= 770.9344613149868 BIC= 824.530853637219
[7]:
ARMA(X_{0,t} - 0.214424 X_{0,t-1} + 0.432622 X_{0,t-2} + 0.203859 X_{0,t-3} + 0.0512422 X_{0,t-4} = E_{0,t} - 0.194383 E_{0,t-1} + 0.461067 E_{0,t-2}, E_t ~ Normal(mu = 0, sigma = 0.406619))
[8]:
# CASE 2 : we specify a range of (p,q) orders ################################### # Range for p pIndices = [1, 2, 4] # Range for q = [4,5,6] qIndices = [4, 5, 6] # Build a Whittle factory with default SpectralModelFactory (WelchFactory) # this time using ranges of order p and q factory_range = ot.WhittleFactory(pIndices, qIndices) # Estimate the arma model from a process sample arma_range, criterion = factory_range.buildWithCriteria(sample) AICc, AIC, BIC = criterion[0:3] print('AICc=', AICc, 'AIC=', AIC, 'BIC=', BIC) arma_range
AICc= 4443.4456045627585 AIC= 4443.217962286336 BIC= 4516.222475664246
[8]:
ARMA(X_{0,t} + 0.382771 X_{0,t-1} + 0.185752 X_{0,t-2} = E_{0,t} + 0.385312 E_{0,t-1} + 0.192682 E_{0,t-2} - 0.191497 E_{0,t-3} - 0.102842 E_{0,t-4}, E_t ~ Normal(mu = 0, sigma = 0.409595))
[9]:
# Results exploitation # Get the white noise of the (best) estimated arma arma_range.getWhiteNoise()
[9]:
WhiteNoise(Normal(mu = 0, sigma = 0.409595)) | http://openturns.github.io/openturns/master/examples/data_analysis/estimate_arma.html | CC-MAIN-2019-35 | refinedweb | 723 | 51.14 |
Behind every great web app is a great API. In this article, I'll show you how to design a RESTful API and build it with Python and Flask.
The code used in this article can be found on GitHub. When you see a caption, such as Checkpoint: 01_sighting_model, it means that you can switch to the branch named "01_sighting_model" and review what the code looks like at that point in the article.
A Bit of Jargon
An API is a collection of functions that allow a software program to access data from an application. The software program can read or change the application's data by calling these functions.
REST stands for Representational State Transfer. It is a set of rules that clients and servers must follow when communicating with each other. If a communications system follows these rules, it is called RESTful. The most famous RESTful communications protocol is HTTP. This means that the Web itself is built following the REST approach.
In a RESTful service, a URL should identify a piece of information that the user might want to interact with. Each piece of information is termed a resource. Resources can be retrieved and changed using the HTTP verbs. Here are four of them and their effect on a resource.
- Creates a new resource
- GET
- Reads a resource
- PUT
- Updates an existing resource
- DELETE
- Deletes a resource
These are simple concepts that you use every day on the Web. A web page is a resource that is represented by a URL. When you type a URL into your address bar and press Enter, your browser issues a GET request on the web page identified by the URL. The host server sends a response back to your browser with the web page so that you can read the resource. Likewise, when you submit a form, your browser issues a POST request on the form page so that a new resource can be created on the host server.
Flask is a web framework for Python that is easy to use and simple to learn. I've written on how to create a simple Flask app containing static pages and a contact page, so check those out for a deeper look at Flask.
Choosing a Dataset
We'll make a places API in this article. The data comes from Infochimps's UFO Sightings dataset (I couldn't pass up using it). The dataset has more than 60,000 accounts of UFO sightings reported by location. Here's a sample of what it looks like:
Here's a description of each column:
- sighted_at - The date the sighting occurred, formatted as yyyymmdd
- reported_at - The date the sighting was reported, formated as yyyymmdd
- location - The city and state where the sighting occurred
- shape - A description of the UFO shape
- duration - The duration of the sighting
- description - A description of the sighting
I geocoded each location field into a latitude and longitude, so here are the same five rows of the dataset with that modification. The full geocoded version of the dataset can be found here, and I'll use this version for the rest of the article.
Designing the API
Now that we have a dataset, we need to make it available so that developers can build on top of it.
One way to do this is to implement a couple functions for the developer to use. Here are some ideas:
/getSightingwill return a specific sighting
/findSightingsNearLocationwill get all sightings within a certain radius distance of a given latitude and longitude, perhaps the user's current location
/addNewSightingwill update the database with a new extraterrestrial event
/getSightingsLongerThanwill filter on the duration column to get all sightings longer than some time interval, say 10 minutes
/sightingsTodaywill indicate whether any UFOs have been spotted today
/sightingsSincewill return all sightings after some date
- ...
The problem with this design is that it is arbitrary and inconsistent. You could have just as easily named the URL in the second bullet
/getSightingsNearMe. Furthermore, there is no evident pattern to the URLs - some have verbs while others don't, some perform more specific operations than others, etc. These problems will compound themselves as the API grows in the future and more functions are added. The net result is an API that is difficult to use. Said more bluntly, this is bad API design.
We need to keep things simple and consistent. We can do so by following the REST guidelines. In the design above, I was trying to use URLs to describe actions on resources. The REST approach sets a distinction that URLs should be used only to identify resources, while HTTP verbs should be used to specify actions on those resources. Let's maintain this distinction and design a better API.
In a RESTful API, only two URLs are needed per resource. The first URL is for a collection, which for the UFO sightings dataset is
/sightings/. The second URL is for a specific element, which in our case is
/sightings/<id>.
When we apply the HTTP verbs to these two resources, we get the following grid:
The grid describes the effect each HTTP verb has on the two resources. For instance
GET /sightings/ should return a list of all UFO sightings while
GET /sightings/<id> should display that specific sighting.
The grid is not perfect. First,
POST /sightings/<id> translates to "add a new sighting to
<id>." This doesn't make sense and therefore should just return an error. It makes more sense to add a new sighting to the collection of sightings, so
POST /sightings/ should be allowed. Likewise, if you try to update an
<id> that does not exist in the database by doing
POST /sightings/<id>, it should return an error to avoid unexpected behavior.
Overall the grid lends consistency to the API design. The URLs only represent sightings. They no longer contain arbitrary verbs. Instead, the standard verbs that HTTP provides are used to retrieve and change sightings.
But what about more specific operations, like finding all sightings in a certain location or retrieving all sightings with a duration longer than n minutes? If we add more URLs to express these operations, we'll get back into bad API design territory. Instead, let's leave the two URLs
/sightings/ and
/sightings/<id> alone and add these possibilities as optional parameters in a query string behind the
? separator. For example if we wanted to get a list of 10 sightings starting at record 20, we could do
GET /sightings/?limit=10&offset=20. If we wanted to get a list of all sightings within a 10 mile radius of a latitude and longitude, we could do
GET /sightings/?location=lat,lng&radius=10.
In this article, I'll show you how to implement the GET column of the RESTful grid. In other words, I'll show you how to make a read-only API. I'll also choose JSON as the format for the API to return.
Implementing the API
There were a lot of details in the section above. Fortunately, Flask makes it a cinch to translate the API design into an implementation, so let's get started.
Install MySQL
First let's install MySQL to manage the raw UFO sightings data. Check to see if your system already has MySQL installed:
$ mysql --version
If a version number shows up, you're good to go and can skip to the next section. If not, you'll need to install MySQL on your machine. There are several Googleable articles that provide machine-specific installation instructions better than I could, so I'll defer to them. The installation usually consists of running a command or an executable. For example on Ubuntu it is:
$ sudo apt-get install mysql-server mysql-client
Set Up MySQL
Once MySQL is installed, log in with the
--local-infile option set to 1 so that we can import data from a local text file. Make sure to fill in your MySQL username and password.
$ mysql --local-infile=1 -u username -p
Next, create a new database named "ufosightings" and a table inside of it named "sightings."
> CREATE DATABASE ufosightings; > USE ufosightings; > CREATE TABLE sightings ( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, sighted_at INT(8), reported_at INT(8), location VARCHAR(100), shape VARCHAR(10), duration VARCHAR(10), description TEXT, lat FLOAT(10, 6), lng FLOAT(10, 6) ) ENGINE = MYISAM CHARACTER SET utf8 COLLATE utf8_general_ci;
Download the raw text file of UFO sightings and import it into the "sightings" table. The import is a sweet one-liner. Make sure that you change the file path to where you downloaded the file.
> LOAD DATA LOCAL INFILE '/path/sightings.tsv' INTO TABLE sightings FIELDS TERMINATED BY '\t' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Install Flask
Create a new folder named sightings/ for the Flask app.
$ mkdir sightings $ cd sightings/
Next, let's use virtualenv to create an isolated Python development environment where we can do all our development work. This will create a folder venv/ which contains the package manager pip and a clean copy of Python.
$ virtualenv venv
Now let's activate our development environment and safely install Flask inside of it using pip.
$ . venv/bin/activate $ pip install flask
Install Flask-SQLAlchemy
Instead of shipping with extra functionality by default, Flask lets you add it on as you need it by using extensions. A Flask extension is a package that adds a specific functionality to your app. There are several Flask extensions available for you to use. We'll use Flask-SQLAlchemy which adds database support. Let's install it now.
$ pip install flask-sqlalchemy
We'll also need a library called MySQLdb so that Python can interface with MySQL, so let's install that too.
$ pip install mysql-python
Configure Flask and Flask-SQLAlchemy
Now that all the installations are complete, the next step is to create a new Flask application and connect it to MySQL using Flask-SQLAlchemy. Let's create a file inside sightings/ named routes.py and make this happen.
sightings/routes.py
from flask import Flask, request from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) db = SQLAlchemy(app) app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://your-username:your-password@localhost/ufosightings' if __name__ == '__main__': app.run(debug=True)
- First we imported the main Flask class and a class named
request.
requestwill let us distinguish between the HTTP verbs.
- Next we imported the SQLAlchemy class.
- Then we made a variable
appcontaining a usable instance of the Flask class and a variable
dbcontaining usable instance of the SQLAlchemy class
- Next we connected the Flask app to the MySQL database "ufosightings." We did so by specifying our username, password, server, and desired database as a data URI. Since we're developing locally, the server is localhost. Make sure to change the MySQL username and password to your credentials.
- Finally, we used the function
app.run()so that we can run our app on a local server later. We'll set the debug flag to 1 so that we can see error messages if anything goes wrong and so that the server automatically reloads after we make changes to the code.
-- Checkpoint: 00_configuration --
The UFO sightings data lives in the "sightings" table inside the "ufosightings" database. Let's create a model representing this table so that we can query it from our Flask app.
sightings/routes.py
from flask import Flask, request . . . class Sighting(db.Model): __tablename__ = 'sightings' id = db.Column(db.Integer, primary_key = True) sighted_at = db.Column(db.Integer) reported_at = db.Column(db.Integer) location = db.Column(db.String(100)) shape = db.Column(db.String(10)) duration = db.Column(db.String(10)) description = db.Column(db.Text) lat = db.Column(db.Float(6)) lng = db.Column(db.Float(6)) if __name__ == '__main__': ...
We added a new class named Sighting to routes.py and defined an attribute for each column in the "sightings" table. That's all there is to the model. This class will now be our interface to query the "sightings" table.
-- Checkpoint: 01_sighting_model --
Set Up URLs
From the API design above, the two URLs we want to support are
/sightings/ and
sighting/<id>. Let's start with the URL
/sightings/ and create a new URL mapping for it inside routes.py
sightings/routes.py
from flask import Flask, request, jsonify . . . @app.route('/sightings/', methods=['GET']) def sightings(): if request.method == 'GET': results = Sighting.query.limit(10).offset(0).all() json_results = [] for result in results: d = {'sighted_at': result.sighted_at, 'reported_at': result.reported_at, 'location': result.location, 'shape': result.shape, 'duration': result.duration, 'description': result.description, 'lat': result.lat, 'lng': result.lng} json_results.append(d) return jsonify(items=json_results) if __name__ == '__main__': ...
- We start by importing a Flask function named
jsonify(). Our API will support JSON, so we can use
jsonify()to return a JSON response to the browser.
- Next we map the URL
/sightings/to the function
sightings(). When this URL is visited, the
sightings()function will be called. Notice that we've also specified that we want this URL to respond to the HTTP verb GET. Since this is a list, you can simply append the other HTTP verbs that you want to support, eg
['GET', 'POST']. I'll just support GET for this read-only API.
- Inside
sightings()we determine whether a GET request has been issued to the URL. You can support more verbs by using an
if..elif..elsestatement, similar to this example.
- Once we've determined that a GET request has been issued, we use Flask-SQLAlchemy's query syntax to retrieve the first 10 records from the "sightings" table. The equivalent SQL statement is
SELECT * from sightings LIMIT 10 OFFSET 0;
- The variable
resultsnow contains a list of records returned from the database query. Each record is a SQLAlchemy object. We'll need to convert this list of SQLAlchemy objects into a JSON list so that we can pass it to
jsonify(). This is what the next block in
sightings()does; it creates a dictionary out of each record in
results, and stores a list of dictionaries in the variable
json_results
- Finally, we pass the records in
json_resultsto the
jsonify()function in order to return JSON in the browser.
We're now ready to see the output of all our hard work. Go to the command line and type
$ python routes.py
Visit in your favorite web browser.
That's some good-looking JSON! When we visited, the code in routes.py mapped the URL
/sightings/ to the function
sightings().
sightings() issued a database query for the first 10 records of the "sightings" table and returned it to us as JSON in the browser.
-- Checkpoint: 02_sightings_collection --
Let's do the same thing for the second URL
sightings/<id>. Open up routes.py and create a new URL mapping.
sightings/routes.py
from flask import Flask, jsonify . . . @app.route('/sightings/<int:sighting_id>', methods=['GET']) def sighting(sighting_id): if request.method == 'GET': result = Sighting.query.filter_by(id=sighting_id).first() json_result = {'sighted_at': result.sighted_at, 'reported_at': result.reported_at, 'location': result.location, 'shape': result.shape, 'duration': result.duration, 'description': result.description, 'lat': result.lat, 'lng': result.lng} return jsonify(items=json_result) if __name__ == '__main__': ...
This mapping looks very similar to what we did for the
/sightings/ URL. Let's go through it line by line.
- Flask lets you capture variables in a URL by using the syntax
/url/<variable_name>. We'll use this to show a sighting with the given ID. Looking at routes.py we start by creating a URL for
/sightings/<int:sighting_id>. We'll make sure that
sighting_idis an integer by using the converter
int. Again I'm only supporting GET requests, but you can easily add more HTTP verbs to the list.
- Next we map the URL
/sightings/<int:sighting_id>to function
sighting(), which takes the
sighting_idas an input.
- Inside
sighting(), we run a query to retrieve the sighting with the given ID. For example,
Sighting.query.filter_by(id=1).first()would be equivalent to the SQL statement
SELECT * from sightings WHERE id=1;
- The query result is a SQLAlchemy object, so we convert it to a dictionary just like we did in the
sightings()function, and then return it to the browser using
jsonify().
Switch back to the browser and go to
When we visited, the URL
/sightings/<int:sighting_id> captured the number
1 and stored it in the variable
sighting_id.
sighting_id was passed to the function
sighting(), which queried the "sightings" table for the record with the matching id. The result was returned to us as JSON in the browser.
-- Checkpoint: 03_sighting_element --
We've accomplished a lot so far. We implemented the two URLs for our API, supported GET requests, and returned that data as JSON. The last item remaining is to make more specific operations possible by adding optional parameters in a query string.
Query Parameters
In the
sightings() function in routes.py, we've currently hardcoded the limit to 10 and the offset to 0. Let's make it possible for the developer to control how many records to return by passing his own limit and offset as optional parameters. Flask makes it possible to collect parameters from the URL like this:
lim = request.args.get('limit', 10)
If a request for
/sightings/?limit=20 is made, then
lim would be set to 20. If the
limit parameter is not provided in the URL, then
lim would default to 10.
Let's incorporate this into the
sightings() function.
sightings/routes.py
@app.route('/sightings/', methods=['GET']) def sightings(): if request.method == 'GET': lim = request.args.get('limit', 10) off = request.args.get('offset', 0) results = Sighting.query.limit(lim).offset(off).all() json_results = [] . . .
The variables
lim and
off will read the values of the
limit and
offset parameters, if they exist. Otherwise they will default to 10 and 0, respectively.
lim and
off are then passed to the database query to return that portion of the sightings records.
Open up the browser and visit. You'll see a list of three sightings starting at record 30. Specify your own limit and offset to change the result window.
-- Checkpoint: 04_limit_offset --
Let's add one more specific operation. It would be really cool to return a list of UFO sightings around a certain location. For example, a request for
/sightings/?location=33.3942655,-104.5230242&radius=10 would return a list of sightings within a 10 mile radius of Roswell, NM.
Let's figure out the database query first. We need some way to select all sightings around a latitude and longitude. The good folks of the Google Geo APIs Team have shown the way to do this. Here is their SQL statement and its description from their page.
SELECT id, ( 3959 * acos( cos( radians(37) ) * cos( radians( lat ) ) * cos( radians( lng ) - radians(-122) ) + sin( radians(37) ) * sin( radians( lat ) ) ) ) AS distance FROM markers HAVING distance < 25 ORDER BY distance LIMIT 0 , 20;
Here's the SQL statement that will find the closest 20 locations that are within a radius of 25 miles to the 37, -122 coordinate. It calculates the distance based on the latitude/longitude of that row and the target latitude/longitude, and then asks for only rows where the distance value is less than 25, orders the whole query by distance, and limits it to 20 results. To search by kilometers instead of miles, replace 3959 with 6371.
Let's adapt this SQL statement for our needs. Currently the statement is hardcoded for the coordinates (37, -122), a radius of 25 miles, and a limit of 20 results. Let's replace them with variables that we can fill in with values from the URL query string.}
Now the hardcoded values have been replaced with keys which fetch their values from the string substitution dictionary. Let's go ahead and incorporate this into routes.py
sightings/routes.py
python @app.route('/sightings/', methods=['GET']) def sightings(): if request.method == 'GET': lim = request.args.get('limit', 10) off = request.args.get('offset', 0) radius = request.args.get('radius', 10) location = request.args.get('location', ',') lat, lng = location.split(',') if lat and lng and radius:} results = Sighting.query.from_statement(query).all() else: results = Sighting.query.limit(lim).offset(off).all() json_results = [] . . .
- Just as we did for the
limitand
offsetparameters, we start by collecting parameters for
radiusand
location, if they exist. The value of the location parameter is a field where latitude and longitude are separated by a comma. I chose to group latitude and longitude this way because they should always be submitted as a pair to the API.
- Next, if the
latand
lngvalues from the
locationparameter, and the
radiusparameter all exist, then we should prepare the custom SQL statement and execute it. The function
from_statement()gives the ability to run raw SQL statements.
- If
lat,
lng, or
radiusdo not exist, then we should execute the SQL query we had before
That should do it. Open up the browser and try it out for yourself. Remember that
limit parameter can still be used to customize the result window. For example returns the closest three UFO sightings that are within a 25 mile radius of San Francisco, CA.
-- Checkpoint: 05_location_radius --
Next Steps
APIs are awesome. By following the REST guidelines, you can design a logical and powerful web service that others can use. In this article, we built a RESTful API using Python and Flask. The concepts can be extended to create other more complex APIs.
There are several directions to go from here. Some ideas:
- I just showed how to support GET requests in the API. You could implement the rest of the RESTful grid by adding POST, PUT, and DELETE requests to your API.
- I chose to make a JSON API. XML is another commonly used format, so you could make your API available in XML as well so that the developer can choose which format works best for him.
- (My favorite) Create your own API from scratch. Find a cool dataset or use one that you already own and build an API on top of it. Some good sources for finding datasets are:
So go forth, build a sweet API, and stay RESTful! | http://tech.pro/tutorial/1213/how-to-build-an-api-with-python-and-flask | CC-MAIN-2014-52 | refinedweb | 3,690 | 66.33 |
35 Advanced Tutorials to Learn Kubernetes — Faun
If you are serious about learning Kubernetes, you should focus on in-depth tutorials and practical use cases. It's not hard to find beginner content about Kubernetes, so in this list we will identify some good posts for going deeper. We also created a list of the best stories Faun members have written in our publication; you can check it here.
The list does not follow any particular order.
Kubernetes — flat, NAT-less Networking
So, how exactly does Kubernetes achieve a flat, NAT-less network for inter-pod communication? Well, not surprisingly, it doesn't: it relies on a Container Network Interface (CNI) plugin to set up the network.
Kubernetes Security — Are your Container Doors Open?
Container adoption in the IT industry is growing dramatically. That surge is the driving force behind the eagerness to get on board with the most popular orchestration platform around: organizations are jumping on the Kubernetes bandwagon to orchestrate and scale their container workloads. It allows for continuous integration and delivery; handles networking, service discovery, and storage; and can do all of that in multi-cloud environments.
Writing Your First Kubernetes Operator
In this article, we’ll see how to build and deploy your first Kubernetes Operator using the Operator SDK.
Using Kubernetes for Local Development — Minikube
If your ops team is using Docker and Kubernetes, it is recommended to adopt the same or similar technologies in development. This will reduce the number of incompatibility and portability problems and makes everyone treat the application as a shared responsibility of both the Dev and Ops teams.
Configuring HA Kubernetes cluster on bare metal servers with kubeadm.
A Gentle Introduction to Kubernetes
In this story, we're going to learn how to deploy Kubernetes services and the Ambassador API gateway. We'll examine the difference between Kubernetes proxies and a service mesh like Istio, see how to access the Kubernetes API, and discover some security pitfalls when building Docker images, among many other interesting things.
An Overall View On Docker Ecosystem — Containers, Moby, Swarm, Linuxkit, containerd & Kubernetes
The goal of this blog post (and the video) is to share an overall view of container technologies. Rather than going through many technical details, we're going to take a global view of containers and Docker.
Docker has changed a lot since its first version, and this can be confusing for engineers and developers trying to learn the technology.
That's why we're going to cover different concepts from the container ecosystem and the relationships between them, along with an introduction to Docker and its most important milestones up to 2018.
Google Kubernetes Engine; Explain Like I’m Five!
This tutorial covers creating your first managed Kubernetes cluster on Google Kubernetes Engine using Terraform.
Security problems of Kops default deployments
We have all seen the typical tutorial demonstrating how easy it is to deploy a Kubernetes Cluster on AWS using Kops. It’s almost a one-liner. In this article I’ll start from this one-liner and demonstrate several security flaws of the default deployment.
Exploiting these flaws gives an attacker full cluster control. This means that a single app compromise becomes a full cluster compromise.
When thinking about security, you must think about defense in depth. The system must be conceived so that a breach somewhere can be contained. For a container management platform, this means that a breached container shouldn’t be able to access its host or other containers.
In this article I'll show that a default Kops setup has no security in depth and how an attacker can use a single compromised container to gain access to everything in the cluster. I'll also show you how to handle and fix these security flaws to the best of my knowledge.
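The defense-in-depth idea above is usually built from several layers (RBAC, network policies, pod security settings). As a minimal illustration of one such layer (a sketch, not the article's own fix), a default-deny NetworkPolicy blocks all ingress traffic to pods in a namespace unless another policy explicitly allows it; the namespace name here is a placeholder:

```yaml
# Deny all ingress traffic to every pod in the namespace "example-ns"
# unless a more specific NetworkPolicy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-ns        # placeholder namespace
spec:
  podSelector: {}              # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed, so all ingress is denied
```

Note that NetworkPolicies only take effect if the cluster's CNI plugin enforces them, which ties back to the networking discussion earlier in this list.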
Playing with AKS & AAD
This quick workshop will walk you through configuring AKS with AAD to use RBAC to authenticate users.
Istio step-by-step, a 12-part series
Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more. Istio supports services by deploying a special sidecar proxy throughout the environment that intercepts all network communication between microservices; you then configure and manage Istio using its control-plane functionality, which includes:
Automatic load balancing for HTTP, gRPC, WebSocket and TCP traffic.
** gRPC — a modern open-source high-performance RPC framework that can run in any environment
Fine-grained control of traffic behaviour with rich routing rules, retries, failovers and fault injection
A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
Automatic metrics, logs and traces for all traffic within a cluster, including cluster ingress and egress.
** cluster ingress — a collection of rules that allow inbound connections to reach the cluster services.
** cluster egress — a collection of rules that allow outbound connections to reach the cluster services.
Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
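To make the routing bullet concrete, here is a sketch of what fine-grained traffic control looks like in Istio: a VirtualService that splits traffic between two versions of a service (the service and subset names are hypothetical, and the subsets would be defined in a separate DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                  # hypothetical in-mesh service
  http:
  - route:
    - destination:
        host: reviews
        subset: v1           # subsets come from a DestinationRule
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10             # canary: 10% of traffic goes to v2
```

The sidecar proxies apply this rule to every request, so the traffic split requires no application code changes.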
Highly Available and Scalable Elasticsearch on Kubernetes
In the previous post we learned about StatefulSets by scaling a MongoDB replica set. In this post we will be orchestrating an HA Elasticsearch cluster (with separate master, data and client nodes) along with ES-HQ and Kibana.
How to Pass Certified Kubernetes Administrator (CKA) exam on first Attempt
I took the CKA exam a few weeks ago and managed to clear it on my first attempt. I've compiled a few tips on how you can clear the CKA exam on your first attempt too.
Why I no longer use Terraform for Templating Kubernetes
One of the most important principles of being an engineer is being able to admit when you are wrong. Well folks, I was wrong. Some of you may have read my previous blog post about templating k8s with Terraform. Since that time, I have come to understand the value of Helm. If you recall, this is a big transition from my earlier sentiment of “I have never understood the value of Helm”.
Scaling MongoDB on Kubernetes
We will deploy the following components for our MongoDB cluster:
- Daemon Set to configure HostVM
- Service Account and ClusterRole Binding for Mongo Pods
- Storage Class to provision persistent SSDs for the Pods
- Headless Service to access to Mongo Containers
- Mongo Pods Stateful Set
- GCP Internal LB to access MongoDB from outside the Kubernetes cluster (Optional)
- Access to pods using Ingress (Optional)
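The last two components in the list above are optional; the heart of the setup is the pairing of a headless Service with a StatefulSet, which gives each Mongo pod a stable DNS name and its own persistent volume. A skeleton of that pairing (names, image version and sizes are illustrative, not the article's exact manifests):

```yaml
# Headless Service: gives each StatefulSet pod a stable DNS name,
# e.g. mongo-0.mongo.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    app: mongo
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo           # must match the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.2       # illustrative version
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:        # one persistent volume per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast   # illustrative StorageClass name
      resources:
        requests:
          storage: 10Gi
```

With this in place, the replica set members can find each other by their stable DNS names rather than by pod IPs, which change on rescheduling.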
How to Setup a Perfect Kubernetes Cluster using KOPS on AWS
by Krishna Modi
Kubernetes is currently the most popular container orchestration system, and it has gained that popularity because of its amazing features and ease of container automation. Even though Kubernetes automates most of the container lifecycle, setting up a Kubernetes cluster has been a big pain point. Kops makes setting up a cluster so darn easy that it just works without much hassle!
Even though Kops makes it a cakewalk to create a Kubernetes cluster, there are some best practices we need to follow to create an optimal K8s cluster.
Today, I'll walk you through the detailed steps to create a Kubernetes cluster with 3 master nodes and 2 worker nodes (1 AWS on-demand instance and 1 AWS spot instance) within a private topology deployed across multiple availability zones.
How to Deploy a Express Node.js app on Kubernetes
Whileised applications.
Kubernetes Headless service vs ClusterIP and traffic distribution
Default Kubernetes service type is
clusterIP, When you create a headless service by setting clusterIP
None, no load-balancing is done and no cluster IP is allocated for this service. Only DNS is automatically configured. When you run a DNS query for headless service, you will get the list of the Pods IPs and usually client dns chooses the first DNS record.
How to Create an API Gateway using Ambassador on Kubernetes
by Krishna Modi
API Gateway is an important aspect in your Kubernetes deployment for your services. It acts as an single entry point and can help simplify a lot of tasks like Service Discovery, Distributed Tracing, Routing, Rate Limiting. It can offer a great flexibility and better configuration for your services.
Envoy is one of the very popular API gateways currently available which can handle extensive loads. With Kubernetes, Ambassador is the most popular and efficient way to use Envoy.
Today, I’ll walk you through the detailed steps to deploy Ambassador on a Kubernetes cluster we deployed in my previous post, and configure it to use AWS load balancer for incoming traffic and route it to various services based on rules.
Learning Kubernetes on EKS by Doing
by Haimo Zhang
Other Parts of this blog series :
Learning Kubernetes on EKS by Doing Part 1 — Setting up EKS
Learning Kubernetes on EKS by Doing Part 2 — Pods, ReplicaSets and Deployments
Learning Kubernetes on EKS by Doing Part 3 — Services
Learning Kubernetes on EKS by Doing Part 4 — Ingress
Learning Kubernetes on EKS by Doing Part 5 — ConfigMaps and Secrets
Learning Kubernetes on EKS by Doing Part 6 — StatefulSets
How to inject chaos on Kubernetes resources using LitmusChaos
by Raj Babu Das
It’s very difficult to handle failures when your application is in production. If your application somehow fails while in production, it cost you a lot. So, your application should be fault-tolerant to handle this kind of situations. Here is the solution for this kind of situations.
In this is a hands-on tutorial, I am going to inject chaos on any Kubernetes resources to check its fault-tolerant state. Here I am using a chaos tool i.e LitmusChaos for this tutorial. so, here we start.
In this article, I will show you how to run Istio on Kubernetes. Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. Istio supports managing traffic flows between microservices, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code
Rancher : One place for all Kubernetes Clusters
Rancher is open-source software for delivering Kubernetes-as-a-Service.
Managing kubernetes clusters running in different cloud providers was never an easy task until Rancher came. So what exactly is Rancher , Rancher is an open source platform where you can create the cluster in different clouds or import an existing one as well. Today I will tell you how to spin up a kubernetes cluster in Google cloud, AWS Cloud and how to import a cluster from Oracle Cloud. All these three clusters you will be able to see and manage from one place itself which is nothing but Rancher Dashboard. Rancher have wide variety of tools and the company is coming up with more and more cool open source projects including the k3os.io that they recently launched . I will show you the creation of kubernetes cluster from rancher and how easy monitoring and deployments can be done via Rancher Dashboard.
Application Rollout Strategies — Kubernetes & Istio
As most modern software developers can attest, container orchestration systems such as Kubernetes have provided users with dramatically more flexibility for running cloud-native based applications as microservices on physical and virtual infrastructures. Sometimes deploying an application to all machines in the environment at once, it is better to deploy the application in batches. In production environments, reducing the downtime and risk while releasing newer versions becomes business critical. This is especially critical in production applications with live end-user traffic all the time like a website and related components which are critical to business, such as a e-commerce, banking or investment website.
These applications almost always have frequent deployments. For example, a mobile application or an consumer web application may undergo several changes within a month. Some are even deployed to production multiple times a day. The speed at which you constantly update your application and deploy it’s new features to the users plays a significant role in maintaining consistence.
Application deployment becomes more complicated the more the product grows. Deployment of a new application thus may mean deployment of new infrastructure code with it. A deployment strategy determines the deployment process, and is defined by the deployment configuration that a user provides while hosting the application on Kubernetes.
Ballerina makes Kubernetes more “Boring”
We wanted to have some kind of distinctive way to connect these two worlds. Ballerina Kuburnetes annotation is the outcome of thinking out of the box to solve this problem. Ballerina is an open-source programming language optimized to write cloud-native applications such as microservice. It is intended to be the core of a language-centric middleware platform. It has all the general-purpose functionality expected of a modern programming language, but it also has several unusual aspects that make it particularly suitable for its intended purpose.
Could a Kubernetes Operator become the guardian of your ConfigMaps
All this started as a by-product of a meeting I had recently with a customer and also from a conversation I had with a partner. Both events triggered the need of managing configuration in a kubernetes namespace, and because I have been invo…
Just to give you a bit of context…
The meeting with the customer was focused on new Openshift features around Ops, and most of the time was spent on Kubernetes Operators, why, how, etc. In fact it was conducted as a lab where we used the Prometheus Operator, I’m referring to this short lab. What is curious is that the relevant outcome of the meeting came from a side conversation about their CI/CD pipelines and specifically about configuration management… at that moment I thought what if we use an operator to ensure that configuration is as defined in the git repository.
The conversation with the partner happened around the same week… again a side conversation (how important is wandering around every now and then ;-) this time it was about creating some kind of archetype to speed up the first moves of a new project forced me to develop the operator. The key concept of the side conversation was GitOps, a new concept for me that fitted perfectly with the previous conversation about configuration management.
So I decided to prove that this all made sense… and here’s the result.
How to Setup Scalable Jenkins on Kubernetes
Why do we want to run Jenkins in kubernetes? That might be your first question. You might not find a requirement to run Jenkins in K8s strait away. When your codebase grows larger or run many jobs parallelly, Jenkins will grow very largely. This will slow down your builds and lead to unnecessary resource utilization. Let’s assume that you have 3 Jenkins slaves and each can run 3 job parallelly. Then you can run maximum 9 jobs parallelly, other jobs have to wait. Solution for these is scaling Jenkins.
Scaling Jenkins is vey easy. Jenkins has scaling feature out-fo-the-box. Jenkins comes with master/slave mode. Master is responsible for maintaining jobs, users, configurations, scheduling jobs in slaves and etc… Slaves are Jenkins agents, their primary task is executing jobs scheduled by the master.
Every one knows K8s is container orchestration platform. So I’m going to implement this solution in K8s using its features.
Kubernetes pod security 101
In this blog post, we will look at applying the basic security policies to our deployments within Kubernetes. We are going to only look at three security settings. These really don’t need a massive understanding of the underlying Linux security subsystems like AppArmor or SELinux but give the most bang for your buck when deploying your applications.
This blog post assumes that you already have a Kubernetes cluster up and running. If not you can deploy one on in the cloud with aks or locally with minikube.
Before we jump into the security context that we will set we will first deploy a simple web application that uses the Golang HTTP package to host an HTML file. The source code for the application can be found here
To deploy our web application we will use the following..
Kubernetes vs Docker Swarm. Who’s the bigger and better?
by Cloud_Freak
Container orchestration is fast evolving and Kubernetes and Docker Swarm are the two major players in this field. Both Kubernetes and Docker Swarm are important tools that are used to deploy containers inside a cluster. Kubernetes and Docker Swarm has many their prominent niche USPs and Pros in the field and they are here to stay. Though both of them have quite a different and unique way to meet the goals, at the end of the day their endpoint remains quite near.
Exploring CICD in Kubernetes Cluster — AWS Part
The speech from Kelsey Hightower always ringing into my head as he spoke about installing Kubernetes cluster and giving kubectl is not the end game of this Kubernetes landscape. How the developer can actually focus in coding instead of thinking on how to navigate + manage Kubernetes through kubectl (which is not that easy in the beginning of adoption) is the main focus. So with that in mind we try to create actually an automation (or pipeline) which starts from
- Taking codes from our Git
- and process it (complie, build, etc)
- put it into registry
- and finally deploy it into Cluster.
Deploying and Scaling Jenkins on Kubernetes
by Anup Dubey
In this post, I’m going to set up Jenkins on the Kubernetes cluster. Here I’ll be using Helm to install Jenkins.
Standardizing deployment patterns using Helm Chart
Why standard workflows? Most of the time our workflows are repetitions of what came before us. This means that any webserver will have the same requirements. A cronjob will need a the generic non-functional requirements as any other cron job.
Don’t forget to check the Part II of this series.
Install KubeSphere on GKE cluster
This guide walks you throungh the steps of KubeSphere minimal installation on Google Kubernetes Engine.
KubeSphere is an enterprise-grade multi-tenant container platform that built on Kubernetes , it’s an open source project that supports installing on Linux and Kubernetes . It provides an easy-to-use UI for users to manage Kubernetes.
Connect Deeper
From security, networking to storage and common operations, the above tutorials and stories from our wonderful community are great resources to learn Kubernetes and go beyond the basics.
⚡ Would you like to read more similar content? Subscribe to our newsletters and join our community team-chat on Slack.
⚡⚡ Do you have tutorials and stories you would like to share with us? Use our submission form and we will review and post your stories in this publication.
⚡⚡⚡ Are you looking for more practical use cases to dive deeper into the Kubernetes and the orchestration ocean? You can preorder “Learn Kubernetes by Building 10 Projects” and profit from our time-limited 80% discount! | https://medium.com/faun/35-advanced-tutorials-to-learn-kubernetes-dae5695b1f18 | CC-MAIN-2020-05 | refinedweb | 3,174 | 51.28 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Styling More Views4:38 with Michael Lustig
In this video we'll cover my solution to styling the playback time, song duration, and rounded corner album artwork. Check out this video for the secret trick you need to use custom attributes in your styles!
- 0:00
For the song's current time and duration,
- 0:02
I created a style called PlaybackTimeStyle and wrapped the height and width.
- 0:20
Then I gave it that nice light gray color, a size of 18sp and a normal typeface.
- 0:44
I'll apply this new style to the two text views in the playback controls layout in
- 0:48
the same exact manner as before.
- 1:22
Now the tricky part.
- 1:23
Styling this image view with the rounded corners looks easy on first inspection.
- 1:27
But applying an attribute for
- 1:29
a custom view in styles can potentially cause quite the headache.
- 1:32
In our layout XML, we normally prepend attributes for
- 1:35
custom views, in our case riv_corner_radius,
- 1:40
with the app namespace, which we specify at the top of our layout file.
- 1:44
However, when we move this custom attribute to a style,
- 1:47
the trick we need is to not include the custom name's face declaration at all.
- 1:54
I know this may seem counter intuitive, but let's create the style, apply it, and
- 1:58
then run the app to verify that it indeed works.
- 2:33
Whoops I spelled the,
- 2:35
obviously its not the rix_corner_radius its riv_corner_radius.
- 2:41
Let me go back into styles, fix that typo and we'll run the app one more time.
- 2:56
Awesome, it still works and now if we wanted,
- 2:59
we could reuse that style in other images that we wanted to have rounded corners.
- 3:04
As a bonus, I wanna show you a cool trick that Android Studio can do to make
- 3:07
refactoring our layouts through styles even easier.
- 3:10
If you have a view with a bunch of attributes, you can right click it,
- 3:14
then select Extract Style from the Extract menu, give it a name, and voila.
- 3:18
The IDE will automatically create the style without you having to think
- 3:21
about anything.
- 3:23
It won't pick up certain attributes, but we can add those in manually if we know,
- 3:26
for example, that we're only going to use this style in a constraint layout.
- 3:30
I'll demonstrate this process in the tool bar style from the settings activity so
- 3:33
you can see it in action.
- 3:35
As I mentioned, Android Studio won't automatically grab the layout constraint
- 3:39
attributes, so I'll add those in manually.
- 3:42
I'll use the ever-so-handy selection by columns to create three item opening and
- 3:46
closing tags for each respective layout attribute.
- 3:49
And then I'll use the same column selection by clicking Alt, and
- 3:52
dragging the mouse to select all three lines of the attributes.
- 3:57
Then I'll hold Alt + Shift and
- 3:58
click the right arrow to highlight the entire attribute name and cut them out.
- 4:04
Clicking and holding Alt again and
- 4:05
dragging to select all three columns of item tags,
- 4:08
I'll paste in the attributes and repeat the process for their given values.
- 4:15
That was a mouthful.
- 4:17
Please let me know if you have any questions about that and
- 4:19
we'll be sure to help you understand.
- 4:21
For more information about column selection, and text selection shortcuts in
- 4:25
Android Studio, based on IntelliJ, see the teachers notes.
- 4:28
Now that you've created your own style and learned a nice shortcut to easily create
- 4:31
them from your existing layouts, feel free to explore more of the styles I've created
- 4:35
for the app, and see if you can picture what they'll do. | https://teamtreehouse.com/library/styling-more-views | CC-MAIN-2017-39 | refinedweb | 718 | 75.03 |
You can subscribe to this list here.
Showing
2
results of 2
Support Requests item #3502385, was opened at 2012-03-11 23:44
Message generated for change (Comment added) made by veiokej
You can respond by visiting:
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: MinGW
Group: None
>Status: Open
Priority: 5
Private: No
Submitted By: Veiokej (veiokej)
Assigned to: Nobody/Anonymous (nobody)
Summary: fread() can't read 0x3FF8000 bytes
Initial Comment:
I've seen this bug multiple times, across WinXP reinstalls. I can never read more than 0x3FF7FFF bytes in a single fread(). That's a really weird limint, since it's an odd number. So if nothing else, it looks like something is off-by-one. Fundamentally, why is this the case? It shouldn't take much memory to read (apart from the buffer that the caller has already allocated). This number does kind of look like some memory or data structure limit has been exhausted, in particular, some nice and round looking arbitrary limit that someone established without regard for the OS context.
Granted, this could be a WinXP issue, but I have 2GB of memory.
Any clues? This is annoying because it prevents me from writing code that processes whole files with a single fread(). Perhaps if the bug is in fact in MinGW, then it would affect the MinGW64 project as well.
If you create a test.bin file of more than this many bytes (say, 64MiB), then you can see the bug with this C code, assuming that it happens in your particular configuration:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define READ_SIZE (0x3FF8000)
int
main(int argc, char *argv[]){
char *string_base;
FILE *handle;
size_t transfer_size;
string_base=malloc((size_t)(READ_SIZE));
if(string_base==NULL){
printf("No memory.\n");
exit(0);
}
handle=fopen("test.bin","rb");
if(handle){
printf("expected transfer_size=0x%08X\n",READ_SIZE);
transfer_size=fread(string_base,(size_t)(1),(size_t)(READ_SIZE),handle);
printf("transfer_size=0x%08X\n",transfer_size);
fclose(handle);
}else{
printf("Please create a random test.bin of size > 2^26 first.\n");
}
return 0;
}
----------------------------------------------------------------------
>Comment By: Veiokej (veiokej)
Date: 2012-03-24 21:50
Sorry Earnie, I don't follow you. The point is that if you increase
READ_SIZE to or beyond 0x3FF8000, then you fail to read as many bytes as
you asked for. It might not be a spec violation, but it just seems bizarre
to have this weird number as a limit on data transfer size.
----------------------------------------------------------------------
Comment By: Earnie Boyd (earnie)
Date: 2012-03-12 05:14
READ_SIZE also contains a terminating NULL character so the number of items
read will always be READ_SIZE - 1 if there are more characters than
READ_SIZE - 1.
----------------------------------------------------------------------
You can respond by visiting:
Support Requests item #3502387, was opened at 2012-03-11 23:49
Message generated for change (Comment added) made by veiokej
You can respond by visiting:
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: MinGW Installer
Group: None
Status: Open
Priority: 5
Private: No
Submitted By: Veiokej (veiokej)
Assigned to: Charles Wilson (cwilso11)
Summary: MinGW Installer ignores preinstalled packages
Initial Comment:
OK, I might be wrong here, but it certainly looks like this is what's happening.
Every time I run the MinGW installer, it asks if I want to use the prepackaged binaries, or the latest ones online. I always choose prepackaged. But then it goes into this ghetto command line interface, and then takes forever to fetch packages from online (an hour, at one point), probably due to inefficient FTP or something. It's painful. If I want to upgrade, I'll just download the whole tarball again. Do we really have to go through this every time?
----------------------------------------------------------------------
>Comment By: Veiokej (veiokej)
Date: 2012-03-24 21:47
Sorry for the delayed reply.
Anyway, "mingw-get update" is a lot better. But you have to admit, it's
intuitive to rerun the install in order to upgrade. I had no obvious reason
to search the documentation for this command.
SourceForge mirrors aren't the problem, so much as the Internet roundtrip
count required in order to get the code. It's the algorithm used in the
"first time" GUI that's the problem. I wish it would just efficiently
download one long streaming file. Not that I expect that to ever change,
but it would be nice if it did.
Anyway thanks guys for the useful recommendations.
----------------------------------------------------------------------
Comment By: Keith Marshall (keithmarshall)
Date: 2012-03-12 07:51
Every time? It seems that your expectations are flawed. I don't see any
bug here; redesignating as a support request.
By your reference to "ghetto command line interface" I deduce that by
"MinGW installer" you probably mean mingw-get-inst.exe, which is NOT the
MinGW installer; it is a GUI wrapper to facilitate your FIRST TIME run of
mingw-get.exe, the installer proper, which, just like the tools it installs
is a command line application.
I don't understand your reference to pre-packaged binaries; it is only
repository catalogues which are prepackaged, and that option is usually a
poor choice, since those prepackaged catalogues quickly become obsolete.
There are no prepackaged binaries, other than those which are subsequently
fetched when you run mingw-get.exe itself, (or when mingw-get-inst.exe runs
it for you); you have installed NOTHING, until mingw-get.exe has been run,
(and yes, this MUST run in command line mode).
Note my emphasis on FIRST TIME run: there is no useful SECOND TIME facility
from mingw-get-inst.exe. Once you've completed that first time run, you
keep your installation current by running mingw-get.exe directly from the
command line, as described in mingw-get-inst.exe's own release notes,
(visible as a README on the download page):
mingw-get update
mingw-get upgrade
So, not only can you use the command line version easily, as Earnie
suggests, you actually SHOULD use it; you should NOT be running
mingw-get-inst.exe more than once, to kick-start your initial
installation.
Finally, I'm not convinced that logging in to SF, and designating a default
mirror is actually helpful when faced with flaky SF download issues. In
any case, we can't help you to resolve such issues; you need to take them
up with SF directly.
----------------------------------------------------------------------
Comment By: Earnie Boyd (earnie)
Date: 2012-03-12 05:03
You can use the command line version easily.
mingw-get update
mingw-get upgrade foo
This will access the net for the update to pull down all of the xml schema
files for the update command and then access the net to pull down the
upgrade for package foo and perhaps its dependencies. There are currently
107 schema files between MinGW and MSYS. The GUI is just a wrapper for the
command line interface for those who insist on a GUI. A GUI is in the
plans for mingw-get as time permits. I'm thinking though that the
prepackaged is in reference to the schema files and not the binaries
themselves. To include the the binaries in the wrapper would require a
sizable file to download and wouldn't be prudent.
We can't control the mirror selection from SF but I'm guessing your hour is
due to you and SF. To help control this you could assign a default mirror
in the SF account interface, login to SF before doing the mingw-get update.
----------------------------------------------------------------------
You can respond by visiting: | http://sourceforge.net/p/mingw/mailman/mingw-notify/?viewmonth=201203&viewday=25 | CC-MAIN-2014-23 | refinedweb | 1,287 | 62.78 |
I am beginner to Android. I want to transfer a image from one activity to another using Intent. In the first activity a user must select the image from many images from a scroll view and after that a image must be displayed in a imageview of very next activity. How can I achieve this with the help of Intent ?
I have understood your problem. Please go through below solution on it.
The easiest way to achieve this is with the use of intent.
First you must create the static variable in one of the class as below :
static
public class MyClass{
public static Bitmap MYPHOTO = null;
}
After that you should get the bitmap from a gallery just to save that bitmap in your MYPHOTO variable.
If you are willing to get the photo from your camera then code is as below.
Intent myintent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(myintent, CAMERA_PIC_REQUEST);
After this you should write below code:
@Override
public void onActivityResult(int myrequestCode, int myresultCode, Intent mydata) {
if (myresultCode == Activity.RESULT_OK) {
switch (myrequestCode) {
case CAMERA_PIC_REQUEST:
Bitmap myb = (Bitmap) mydata.getExtras().get("mydata");
if (myb != null) {
MyClass.MYPHOTO = myb;
}
break;
}
}
Now you can use your MYPHOTO variable in any other Activity.
You can use the same way if you want to pick photo from the gallery. | https://kodlogs.com/36413/how-to-pass-image-from-one-activity-to-another-in-android-using-intent | CC-MAIN-2021-21 | refinedweb | 218 | 57.47 |
The information in this post is out of date.
Visit msdn.com/data/ef for the latest information on current and past releases of EF.
For current Code First Migrations documentation, also see msdn.com/data/ef.

If you already have Alpha 3 installed, you can use the 'Update-Package EntityFramework.Migrations' command in Package Manager Console to upgrade to Beta 1. You will need to close and re-open Visual Studio after updating; this is required to reload the updated command assemblies.
You will also need to update any existing code to reflect a series of class and method renames:
- The Settings class has been renamed to Configuration. When you update the NuGet package you will get a new Configuration.cs (or Configuration.vb) file added to your project. You will need to remove the old Settings file. If you added any logic for seed data etc. you will need to copy this over to the new Configuration class before removing Settings.
(This file rename is a result of us changing the base class for this class from DbMigrationContext to DbMigrationsConfiguration)
- If you have existing migrations that call ChangeColumn you will need to update them to call AlterColumn instead.
- There is a designer code file associated with each migration in your project; for migrations generated with Alpha 3 you will need to edit this file. You will need to add a using statement for System.Data.Entity.Migrations.Infrastructure and change the references to IDbMigrationMetadata to IMigrationMetadata.
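As a sketch of what those updates look like in practice (the migration name, timestamp, table, and columns below are hypothetical), an Alpha 3 migration brought up to Beta 1 would end up roughly like this:

```csharp
// Main migration code file: ChangeColumn from Alpha 3 becomes AlterColumn.
namespace MyApp.Migrations
{
    using System.Data.Entity.Migrations;

    public partial class WidenBlogName : DbMigration
    {
        public override void Up()
        {
            // Alpha 3 scaffolded this as ChangeColumn(...).
            AlterColumn("Blogs", "Name", c => c.String(maxLength: 200));
        }

        public override void Down()
        {
            AlterColumn("Blogs", "Name", c => c.String(maxLength: 100));
        }
    }
}

// Associated designer code file: add the Infrastructure using statement
// and implement IMigrationMetadata rather than IDbMigrationMetadata.
namespace MyApp.Migrations
{
    using System.Data.Entity.Migrations.Infrastructure;

    public partial class WidenBlogName : IMigrationMetadata
    {
        string IMigrationMetadata.Id
        {
            get { return "201112050000000_WidenBlogName"; }
        }

        string IMigrationMetadata.Source
        {
            get { return null; }
        }

        string IMigrationMetadata.Target
        {
            get { return "..."; } // generated, compressed model snapshot (left out here)
        }
    }
}
```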
RTM as EF4.3
So far we have been shipping Migrations as a separate EntityFramework.Migrations NuGet package that adds on to the EntityFramework package. As our team has looked at the grow-up story (moving to Migrations from a Code First database that was created by simply running a Code First application), it has become clear that Migrations is an integral part of Code First. We've also heard feedback that we need to reduce the number of separate components that make up EF. Because of this we are planning to roll the migrations work into the EntityFramework NuGet package so that you get everything you need for Code First applications in a single package. This will be the EF4.3 release.
The timeline to our first RTM of Code First Migrations depends on the feedback we get on this release but we are currently aiming to have a go-live release available in early 2012. We’re planning to make a Release Candidate available before we publish the final RTM.
What’s Still Coming
Beta 1 only includes the Visual Studio integrated experience for Code First Migrations. We also plan to deliver a command line tool and an MSDeploy provider for running Code First Migrations.
We are planning to include the command line tool as part of the upcoming RTM.
We’re working with the MSDeploy team to get some changes into the next release of MSDeploy to support our new provider. Our MSDeploy provider will be available when the next version of MSDeploy is published. This will be after our initial RTM of Code First Migrations.
Getting Started
There are two walkthroughs for Beta 1. One focuses on the no-magic workflow that uses a code-based migration for every change. The other looks at using automatic migrations to avoid having lots of code in you project for simple changes.
- Code First Migrations: Beta 1 ‘No-Magic’ Walkthrough
- Code First Migrations: Beta 1 ‘With-Magic’ Walkthrough (Automatic Migrations)
Code First Migrations Beta 1 is available via NuGet as the EntityFramework.Migrations package.
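For orientation, the Configuration class that the Beta 1 package adds (and that replaces the old Settings class) looks roughly like the sketch below; the context and namespace names are invented, and the Seed override is where any seed-data logic from the old Settings class should be copied:

```csharp
namespace MyApp.Migrations
{
    using System.Data.Entity.Migrations;

    // Base class renamed from DbMigrationContext (Alpha 3)
    // to DbMigrationsConfiguration (Beta 1).
    internal sealed class Configuration : DbMigrationsConfiguration<MyApp.BlogContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(MyApp.BlogContext context)
        {
            // Seed logic carried over from the old Settings class goes here.
        }
    }
}
```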
I just came here to say, apart from the usefulness of Code First Migrations I have to congratulate you on the good job you guys are doing at everything else, i.e., managing this blog with incredibly detailed info, possible release dates, answering comments, walkthroughs side-by-side new releases, taking in feedback and acting on it, etc.
It makes us feel almost like part of the team. Thank you very much.
I will try this new version. I can't use Alpha 3: social.msdn.microsoft.com/…/b89279bd-666b-4489-97f6-684b7667f3ab
Congratulations team!
EF Code First is perfect and Code First Migrations will improve this perfection!
Congratulations!
Your work is perfect! I am very proud of work with this feature!
Thank you for your job and dedication!
I am struggling to do a Zero to one Relationship to a child object. I have tried TPH, TPT and even EDMX.
Please take a look at below two sites where I have posted request for help, I am sure this is possible but maybe EF is not a very popular ORM hence I am not getting much response.
stackoverflow.com/…/entity-framework-inheritance-zero-to-one-relationship-to-child-object-how-t
social.msdn.microsoft.com/…/69d9145a-f7ec-4845-b01a-26e57e9ba16e
How do we configure for SqlCE support?
How do you handle scenarios where you need to update existing data in the database with code rather than SQL? I posted this before, but say you want to use a C# function to populate the Blog.Abstract property in a way that can't easily be done with a SQL update statement (for example, parse the blog contents and truncate at the end of a sentence). We need a way to select data from the database in the migration, do some operations on it in code, and then put that modified data back into the database. We've been using migrations for years and that's one of our most common scenarios (we've had to build custom code to work around it when using FluentMigrator).
All the communication, voting, and early access versions are a breath of fresh air. Keep it up!
I have the EF nuget package installed at my class library and at asp.net MVC project
If you put migration together EF, I will end with migration references and config files in both projects
@Andrew Peters – So is cascadeDelete : true by default now? I am a little confused on how the scaffolding handles this with regards to the fluent api of EF configurations. Could you please elaborate?
Regards,
Sam Striano
Please change the default naming convention for foreign keys to be in line with the SQL Server Management Studio designers. The current naming convention for CF Migrations is as follows:
FK_{child table name}_{parent table name}_{column name}
The designers in SSMS create FK constraint names with this convention:
FK_{child table name}_{parent table name}
"Migrations will now create indexes on foreign key columns."
I generated the script using Update-Database, and I did not see any CREATE INDEX statements for foreign keys. Is there some kind of property setting for this?
I am getting an error when using SqlCE. Using the '-Verbose' flag I can see that all of the table and index related scripts run, but then at the section [Inserting migration history record] I get the error. The full details are located in the pre-release forum under the post titled "Migrations Beta1 – Error when calling 'update-database' when using SqlCe" created on Thursday, 12/1/2011.
I'm in CST, so the date for my post in PST is Wednesday, 11/30/2011.
Got things work using Sql Express and a .mdf file. This is -really- nice! 🙂
@Sam Striano,
By convention, Code First will enable cascade delete for 1..* associations. Perhaps this is what you are seeing?
Cheers,
Andrew.
@Felipe Fujiy – I’ve followed up on the forum thread you linked to
@ Lynn Eriksen – You will need to have SQL CE 4.0 installed and then add the following line to the constructor in your Configuration.cs class:
SetSqlGenerator("System.Data.SqlServerCe.3.5", new System.Data.Entity.Migrations.Sql.SqlCeMigrationSqlGenerator());
@shawn – We have an item on our backlog to look at this but we weren’t planning to put it in v1. We’ve had a few folks ask about it though so we’ll reconsider it for v1. Obviously -script would fail if you had arbitrary code included in a migration.
@Felipe Fujiy – The migrations code will roll into the same assembly so you would still just have one reference in your project. We are planning to add an Enable-Migrations command that would add the Migrations folder so you wouldn’t get it in every project you install EF to.
@zl1 – We need to add the column names on the end to ensure the name is unique if you have multiple FKs between the same tables. You can supply your own name via the API in the migration code file.
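For illustration, supplying an explicit name might look like the following hedged sketch (the table and column names are hypothetical, and the exact parameter list of the Beta 1 migration API may differ from the final release):

```csharp
using System.Data.Entity.Migrations;

public partial class AddPostBlogForeignKey : DbMigration
{
    public override void Up()
    {
        // The optional name argument overrides the auto-generated FK name.
        AddForeignKey("Posts", "BlogId", "Blogs", "Id", name: "FK_Posts_Blogs");
    }

    public override void Down()
    {
        // Drop by the same explicit name on rollback.
        DropForeignKey("Posts", "FK_Posts_Blogs");
    }
}
```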
@zl1 – It is enabled by default. If you generated code migrations with Alpha 3 they won’t have the create index calls, but you can add them in. If you generate new code migrations with Beta 1, or perform automatic migrations, then indexes should be created on any columns that are foreign keys. If you have a situation where this isn’t happening then please start up a thread in the pre-release forum (link is in the blog post). Include your model classes etc. and we’ll work out what is going wrong.
@lynn eriksen – We’ll take a look at the issues you were having with CE, glad that you are up and running with SQL Express.
I posted a question on social.msdn.microsoft.com/…/c81dfd60-ede5-4873-9f4d-04e240c293c2. I would like guidance on how you could handle different database providers (MSSQL, Oracle) and execute database provider specific sql.
What if I want complete control over the DB schema and am willing to create it manually? From time to time I would still need to check whether the database schema is in sync with the EF model. I would appreciate having a tool allowing me to run that type of comparison. I would use it instead of the current version of "EntityFramework.Migrations".
Can you please describe a workflow for working with EntityFramework.Migrations in a team and its project hosted under source control system?
..for example, one developer updated the model, created a new migration file and checked in all changes, including the migration file, to the source control system. Then another developer pulled the recent version into his branch, but he needs to merge it before continuing to work on his own branch of the project.. won't there be problems during such merge operations?
Is the fluent API still supported with Code First Migrations? I don't see my mapping classes being called when Update-Database is called. Also, do you have to shut down Visual Studio and reload it in order for Update-Database to pick up any changes to your class model after having called it previously? That seems to be the behaviour.
@zl1
Actually, I see a benefit to having the column name included in the name of the foreign key. For example, I have some tables where there are multiple foreign keys linking to the same table. SQL Management Studio just appends a digit to the end of the foreign keys. This makes it difficult to tell which foreign key corresponds to which navigation property if you do something like database first (using non-code first). So, it's useful to include the column name in an example like the following. A grant has a local investigator and a principal investigator. Both of these foreign keys map to the Person table. So, if it does include the column name like you say, I see that as a good thing and better than what SSMS does by default.
Grant
LocalInvestigatorId
PrincipalInvestigatorId
Person
Id
I.e. I would rather have
FK_Grant_Person_LocalInvestigatorId
FK_Grant_Person_PrincipalInvestigatorId
than
FK_Grant_Person
FK_Grant_Person1
Here is my alternative to the EntityFramework.Migrations package; instead of maintaining a model change history, it physically compares the database schema with the EF Code First model during database initialization and/or manually on request.
If you develop both the model and the db schema by hand, it lets you quickly find all the differences and incompatibilities between the db schema and the model.
github.com/…/data – It's an alpha version. Feedback is always welcome!
Is it possible to trigger a database update programmatically from within the application at runtime? Otherwise I'm not sure how to use this while automatically deploying/updating my application on hundreds of customers' servers?
@Kristoffer,
Yes, check out the DbMigrator class.
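A minimal sketch of what that could look like (the surrounding class is illustrative; Configuration stands for the migrations configuration class generated in your project):

```csharp
using System.Data.Entity.Migrations;

public static class DatabaseUpdater
{
    // Applies any pending migrations, bringing the target database
    // up to the latest version at application startup.
    public static void UpdateToLatest()
    {
        var migrator = new DbMigrator(new Configuration());
        migrator.Update();
    }
}
```

Calling a method like this during deployment or startup would update each customer's database without requiring the Package Manager Console.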
I am having a problem with model-first programming with many-to-many relations.
Entity Framework keeps changing the names of the join fields back to RelationName_FieldName, even after I try to set the field name to just FieldName.
Bug report is here:
connect.microsoft.com/…/entity-framework-chaning-the-column-names-of-an-association-table
This is a real headache, because we want to do model-first programming, so that we can add custom attributes to our entities.
– pmcilroy@ms
First of all I think Migrations is one of the last functionalities I missed in the CTP, so great job done. However I have one tiny problem.
I switched over to Migrations and therefore had to include a reference to EntityFramework instead of the System.Data.Entity.
Now I'm having problems with changing the state of an entity with the EntityState enumeration. It appears not to be included in EntityFramework, and using Migrations I can not use System.Data.EntityState anymore.
What is the replacement or workaround for this?
@Capellio: you will need references to both EntityFramework.dll and System.Data.Entity.dll, as some parts of the API, like the EntityState enum, are defined in the latter. Hope this helps.
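For illustration, with both references in place the enum remains usable as before (the context and entity types here are hypothetical):

```csharp
using System.Data;          // EntityState is defined in System.Data.Entity.dll
using System.Data.Entity;   // DbContext comes from EntityFramework.dll

public class Blog { public int Id { get; set; } }

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public static class StateDemo
{
    public static void MarkModified(BlogContext context, Blog blog)
    {
        // Works once System.Data.Entity.dll is referenced alongside EF.
        context.Entry(blog).State = EntityState.Modified;
    }
}
```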
Would it be possible to have CF Migrations read annotations in order to determine the [DiscriminatorColumn] in SINGLE_TABLE / Table-per-Hierarchy inheritance? Then the public classes that extend the abstract super class could use a [DiscriminatorValue]. This is how a lot of legacy SQL tables work in the real world.
Hi, I'm having problems with migrations since the Model value for the _MigrationHistory table is about 8800 chars.
Should I manually change the dimension of this field?
Just wondering when or if a second-level cache will be added to the Framework. It would be nice if this was shipped with EF.
Any chance you could share what you are doing to get the MSDeploy provider to work? I hit an issue working on a provider that required to be able to access CLR 4 assemblies.
@Colin Bowern: We've hit a few limitations while implementing our Web Deploy provider. We've raised these to the Web Deploy team.
Having said that, a couple of ways to get .NET 4 code running from a provider are:
1. Configure Web Deploy to run as a .NET 4 process. This may not, however, be possible for all scenarios.
2. Launch a separate .NET 4 process from your provider.
For additional Web Deploy support, I'd try their forums at forums.iis.net/1144.aspx.
Oops, I mean forums.iis.net/1144.aspx
Introduction
In this article we are going to explore how to create a custom TextBox with default text in Windows Phone 7. Whenever we click on the TextBox it clears its default text, and whenever the TextBox has no value the default text is shown again. To build this we will use a Windows Phone class library in which we declare the default-text property. In the class library project we inherit from the TextBox class, which is what makes it a custom TextBox. We use a custom control when we want a control tailored to the user's requirements that makes the user's task easier. The steps given below show how to implement this functionality.
Step 1: In this step we have to create a Windows Phone class library; the figure below shows where to add it.
Step 2: In this step we will see the code for the DefaultTxt_box.cs file which is given below.
Code:
using System;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

namespace defaulttextbox
{
    public class DefaultTxt_box : TextBox
    {
        private string My_Default_Text = string.Empty;

        public string D_Text
        {
            get
            {
                return My_Default_Text;
            }
            set
            {
                My_Default_Text = value;
                MyDefault_Text();
            }
        }

        public DefaultTxt_box()
        {
            this.GotFocus += (sender, e) =>
            {
                if (this.Text.Equals(D_Text)) { this.Text = string.Empty; }
            };
            this.LostFocus += (sender, e) => { MyDefault_Text(); };
        }

        private void MyDefault_Text()
        {
            if (this.Text.Trim().Length == 0)
            {
                this.Text = D_Text;
            }
        }
    }
}
In the code given above we declare a property named D_Text, which places the default string inside the TextBox; this is the property through which we set the default text of the TextBox.
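For example, the default text can also be assigned from code-behind rather than XAML (a hypothetical snippet, assuming the control's namespace is referenced):

```csharp
var box = new DefaultTxt_box();
box.D_Text = "Enter Your Name";  // the setter stores the text and shows it via MyDefault_Text()
```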
Step 3: In this step, build the class library project; it should build successfully.
Step 4: In this step, create another project, which is a Windows Phone application.
Step 5: In this step, add a reference to the phone class library created earlier; the figure shows where to add it.
Step 6: In this step, add the complete phone library project to the solution of the Windows Phone application; let us see how to add it.
Click on Add and select Existing Project, which is shown in the figure given below.
Add the existing project by selecting its project file, which is given below.
Step 7: In this step you will see that the project has been added to your Windows Phone application solution, which you can see in the figure given below.
Step 8: In this step you can see that the custom TextBox control has been added to the toolbox, which you can see in the figure given below; you can drag and drop that control to use it.
Step 9: In this step you will see the code for the MainPage.xaml file which is given below.
<phone:PhoneApplicationPage
x:
<TextBlock Text="Default TextBox" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"
           FontFamily="Comic Sans MS" FontSize="48">
<TextBlock.Foreground>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<GradientStop Color="Black" Offset="0" />
<GradientStop Color="#FF1FE7E9" Offset="1" />
</LinearGradientBrush>
</TextBlock.Foreground>
</TextBlock>
</StackPanel>
<!--ContentPanel - place additional content here-->
<Grid x:
<Grid.Background>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<GradientStop Color="Black" Offset="0" />
<GradientStop Color="#E1B4CEC0" Offset="1" />
</LinearGradientBrush>
</Grid.Background>
<my:DefaultTxt_box
<my:DefaultTxt_box.Background>
<LinearGradientBrush EndPoint="1,0.5" StartPoint="0,0.5">
<GradientStop Color="Black" Offset="0" />
<GradientStop Color="#FF90CFCF" Offset="1" />
</LinearGradientBrush>
</my:DefaultTxt_box.Background>
</my:DefaultTxt_box>
</Grid>
</Grid>
</phone:PhoneApplicationPage>
Step 10: In this step, drag and drop the DefaultTxt_box control from the toolbox onto the MainPage.xaml file; the design of that page is given below.
Step 11: Now run the application; the output is given below.
Output 1: In this default output, the default text "Enter Your Name" is shown inside the TextBox, as given below.
Output 2: After clicking on the TextBox, the default text shown inside it is cleared. You can see it in the figure given below.
Output 3: Whenever you clear the TextBox, the default value is shown again, as shown in the figure given below.
Here are the other resources which may help you.
Create a Watermark TextBox Effect from Windows Phone 7
Create a Simple NumericTextBox Control in Windows Phone 7
Create a Custom Email Validator Control in Windows Phone 7
Image Based Text Effect in Windows Phone 7
Create a Custom Multi-Column StackPanel Control in Windows Phone 7
©2017
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/74f20d/create-a-custom-textbox-with-default-text-in-windows-phone-7/ | CC-MAIN-2017-04 | refinedweb | 881 | 65.22 |
This post is the second of a series; click here for the previous post.
Naming and Scoping
Naming Variables and Tensors
As we discussed in Part 1, every time you call tf.get_variable(), you need to assign the variable a new, unique name. Actually, it goes deeper than that: every tensor in the graph gets a unique name too. The name can be accessed explicitly with the .name property of tensors, operations, and variables. For the vast majority of cases, the name will be created automatically for you; for example, a constant node will have the name Const, and as you create more of them, they will become Const_1, Const_2, etc.1 You can also explicitly set the name of a node via the name= property, and the enumerative suffix will still be added automatically:
Code:
import tensorflow as tf

a = tf.constant(0.)
b = tf.constant(1.)
c = tf.constant(2., name="cool_const")
d = tf.constant(3., name="cool_const")
print a.name, b.name, c.name, d.name
Output
Const:0 Const_1:0 cool_const:0 cool_const_1:0
Explicitly naming nodes is nonessential, but can be very useful when debugging. Oftentimes, when your Tensorflow code crashes, the error trace will refer to a specific operation. If you have many operations of the same type, it can be tough to figure out which one is problematic. By explicitly naming each of your nodes, you can get much more informative error traces, and identify the issue more quickly.
Using Scopes
As your graph gets more complex, it becomes difficult to name everything by hand. Tensorflow provides the tf.variable_scope object, which makes it easier to organize your graphs by subdividing them into smaller chunks. By simply wrapping a segment of your graph creation code in a with tf.variable_scope(scope_name): statement, all nodes created will have their names automatically prefixed with the scope_name string. Additionally, these scopes stack; creating a scope within another will simply chain the prefixes together, delimited by a forward-slash.
Code:
import tensorflow as tf

a = tf.constant(0.)
b = tf.constant(1.)
with tf.variable_scope("first_scope"):
    c = a + b
    d = tf.constant(2., name="cool_const")
    coef1 = tf.get_variable("coef", [], initializer=tf.constant_initializer(2.))
    with tf.variable_scope("second_scope"):
        e = coef1 * d
        coef2 = tf.get_variable("coef", [], initializer=tf.constant_initializer(3.))
        f = tf.constant(1.)
        g = coef2 * f

print a.name, b.name
print c.name, d.name
print e.name, f.name, g.name
print coef1.name
print coef2.name
Output
Const:0 Const_1:0 first_scope/add:0 first_scope/cool_const:0 first_scope/second_scope/mul:0 first_scope/second_scope/Const:0 first_scope/second_scope/mul_1:0 first_scope/coef:0 first_scope/second_scope/coef:0
Notice that we were able to create two variables with the same name - coef - without any issues! This is because the scoping transformed the names into first_scope/coef:0 and first_scope/second_scope/coef:0, which are distinct.
Saving and Loading
At its core, a trained neural network consists of two essential components:
- The weights of the network, which have been learned to optimize for some task
- The network graph, which specifies how to actually use the weights to get results
Tensorflow separates these two components, but it’s clear that they need to be very tightly paired. Weights are useless without a graph structure describing how to use them, and a graph with random weights is no good either. In fact, even something as small as swapping two weight matrices is likely to totally break your model. This often leads to frustration among beginner Tensorflow users; using a pre-trained model as a component of a neural network is a great way to speed up training, but can break things in a myriad of ways.
Saving A Model
When working with only a single model, Tensorflow’s built-in tools for saving and loading are straightforward to use: simply create a tf.train.Saver(). Similarly to the tf.train.Optimizer family, a tf.train.Saver is not itself a node, but instead a higher-level class that performs useful functions on top of pre-existing graphs. And, as you may have anticipated, the ‘useful function’ of a tf.train.Saver is saving and loading the model.
Let’s see it in action!
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
b = tf.get_variable('b', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.save(sess, './tftcp.model')
Output
Four new files:
checkpoint
tftcp.model.data-00000-of-00001
tftcp.model.index
tftcp.model.meta
There’s a lot of stuff to break down here.
First of all: Why does it output four files, when we only saved one model? The information needed to recreate the model is divided among them. If you want to copy or back up a model, make sure you bring all three of the files (the three prefixed by your filename). Here’s a quick description of each:
- tftcp.model.data-00000-of-00001 contains the weights of your model (the first bullet point from above). It’s most likely the largest file here.
- tftcp.model.meta is the network structure of your model (the second bullet point from above). It contains all the information needed to re-create your graph.
- tftcp.model.index is an indexing structure linking the first two things. It says “where in the data file do I find the parameters corresponding to this node?”
- checkpoint is not actually needed to reconstruct your model, but if you save multiple versions of your model throughout a training run, it keeps track of everything.
Secondly, why did I go through all the trouble of creating a tf.Session and tf.global_variables_initializer for this example? Well, if we’re going to save a model, we need to have something to save. Recall that computations live in the graph, but values live in the session. The tf.train.Saver can access the structure of the network through a global pointer to the graph. But when we go to save the values of the variables (i.e. the weights of the network), we need to access a tf.Session to see what those values are; that’s why sess is passed in as the first argument of the save function. Additionally, attempting to save uninitialized variables will throw an error, because attempting to access the value of an uninitialized variable always throws an error. So, we needed both a session and an initializer (or equivalent, e.g. tf.assign).
Now that we’ve saved our model, let’s load it back in. The first step is to recreate the variables: we want variables with all the same names, shapes, and dtypes as we had when we saved it. The second step is to create a tf.train.Saver just as before, and call the restore function.
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
b = tf.get_variable('b', [])
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, './tftcp.model')
sess.run([a,b])
Output
[1.3106428, 0.6413864]
Note that we didn’t need to initialize a or b before running them! This is because the restore operation moves the values from our files into the session’s variables. Since the session no longer contains any null-valued variables, initialization is no longer needed. (This can backfire if we aren’t careful: running an init after a restore will override the loaded values with randomly-initialized ones.)
Choosing Your Variables
When a tf.train.Saver is initialized, it looks at the current graph and gets the list of variables; this is permanently stored as the list of variables that that saver “cares about”. We can inspect it with the ._var_list property:
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
b = tf.get_variable('b', [])
saver = tf.train.Saver()
c = tf.get_variable('c', [])
print saver._var_list
Output
[<tf.Variable 'a:0' shape=() dtype=float32_ref>, <tf.Variable 'b:0' shape=() dtype=float32_ref>]
Since c wasn’t around at the time of our saver’s creation, it does not get to be a part of the fun. So in general, make sure that you already have all your variables created before creating a saver.

Of course, there are also some specific circumstances where you may actually want to only save a subset of your variables! tf.train.Saver lets you pass the var_list when you create it to specify which subset of available variables you want it to keep track of.
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
b = tf.get_variable('b', [])
c = tf.get_variable('c', [])
saver = tf.train.Saver(var_list=[a,b])
print saver._var_list
Output
[<tf.Variable 'a:0' shape=() dtype=float32_ref>, <tf.Variable 'b:0' shape=() dtype=float32_ref>]
The examples above cover the ‘perfect sphere in frictionless vacuum’ scenario of model-loading. As long as you are saving and loading your own models, using your own code, without changing things in between, saving and loading is a breeze. But in many cases, things are not so clean. And in those cases, we need to get a little fancier.
Let’s take a look at a couple of scenarios to illustrate the issues. First, something that works without a problem. What if we want to save a whole model, but we only want to load part of it? (In the following code example, I run the two scripts in order.)
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
b = tf.get_variable('b', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.save(sess, './tftcp.model')
import tensorflow as tf

a = tf.get_variable('a', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.restore(sess, './tftcp.model')
sess.run(a)
Output
1.1700551
Good, easy enough! And yet, a failure case emerges when we have the reverse scenario: we want to load one model as a component of a larger model.
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.save(sess, './tftcp.model')
import tensorflow as tf

a = tf.get_variable('a', [])
d = tf.get_variable('d', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.restore(sess, './tftcp.model')
Output
Key d not found in checkpoint [[ = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
We just wanted to load a, while ignoring the new variable d. And yet, we got an error, complaining that d was not present in the checkpoint!
A third scenario is where you want to load one model’s parameters into a different model’s computation graph. This throws an error too, for obvious reasons: Tensorflow cannot possibly know where to put all those parameters you just loaded. Luckily, there’s a way to give it a hint.
Remember var_list from one section-header ago? Well, it turns out to be a bit of a misnomer. A better name might be “var_list_or_dictionary_mapping_names_to_vars”, but that’s a mouthful, so I can sort of see why they stuck with the first bit.
Saving models is one of the key reasons that Tensorflow mandates globally-unique variable names. In a saved-model-file, each saved variable’s name is associated with its shape and value. Loading it into a new computational graph is as easy as mapping the original-names of the variables you want to load to variables in your current model. Here’s an example:
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.save(sess, './tftcp.model')
import tensorflow as tf

d = tf.get_variable('d', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver(var_list={'a': d})
sess = tf.Session()
sess.run(init)
saver.restore(sess, './tftcp.model')
sess.run(d)
Output
-0.9303965
This is the key mechanism by which you can combine models that do not have the exact same computational graph. For example, perhaps you got a pre-trained language model off of the internet, and want to re-use the word embeddings. Or, perhaps you changed the parameterization of your model in between training runs, and you want this new version to pick up where the old one left off; you don’t want to have to re-train the whole thing from scratch. In both of these cases, you would simply need to hand-make a dictionary mapping from the old variable names to the new variables.
A word of caution: it’s very important to know exactly how the parameters you are loading are meant to be used. If possible, you should use the exact code the original authors used to build their model, to ensure that that component of your computational graph is identical to how it looked during training. If you need to re-implement, keep in mind that basically any change, no matter how minor, is likely to severely damage the performance of your pre-trained net. Always benchmark your reimplementation against the original!
Inspecting Models
If the model you want to load came from the internet - or from yourself, >2 months ago - there’s a good chance you won’t know how the original variables were named. To inspect saved models, use these tools, which come from the official Tensorflow repository. For example:
Code:
import tensorflow as tf

a = tf.get_variable('a', [])
b = tf.get_variable('b', [10,20])
c = tf.get_variable('c', [])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.save(sess, './tftcp.model')
print tf.contrib.framework.list_variables('./tftcp.model')
Output
[('a', []), ('b', [10, 20]), ('c', [])]
With a little effort and a lot of head-scratching, it’s usually possible to use these tools (in conjunction with the original codebase) to find the names of the variables you want.
Conclusion
Hopefully this post helped clear up the basics behind saving and loading Tensorflow models. There are a few other advanced tricks, like automatic checkpointing and saving/restoring meta-graphs, that I may touch on in a future post; but in my experience, those use-cases are rare, especially for beginners. As always, please let me know in the comments or via email if I got anything wrong, or there is anything important I missed. Thanks for reading!
1. There will also be a suffix :output_num added to the tensor names. For now, that’s always :0, since we are only using operations with a single output. See this StackOverflow question for more info. Thanks Su Tang for pointing this out! ↩
QF-Test version 4.0.3 released
==============================
QF-Test version 4.0.3 is now available for download from
Besides bug fixes this update contains several important new features
for all GUI engines:
- Apart from updates for Firefox 35 and 36 the AJAX resolver for GWT
has been rewritten from scratch. It supports GWT version 2.7 and a
significantly wider range of components.
- The new Eclipse/SWT version 4.5 "Mars" is now supported for the
first time.
- Integration of Java FX and Swing is now also supported for the
variant where Swing is embedded into Java FX and also for the
special case where both share the same event dispatch thread.
The current Java updates 8u31 and 7u75/7u76 have also been taken into
account.
Release Notes for QF-Test version 4.0.3
=======================================
New features:
--------------
* Support for Firefox has been updated to match the public release of
Firefox version 35 and support for Firefox version 36 has been
added.
* The resolver for the AJAX framework GWT has been rewritten from
scratch and supports a far wider range of GWT components based on
GWT version 2.7.
* Support for Eclipse/SWT 4.5 "Mars" has been added based on Eclipse
version 4.5M4.
* Recording of Drag&Drop is now implemented for Java FX also.
* In addition to Java FX components embedded in Swing, hybrid
applications that work the other way round and embed Swing
components in Java FX are now also supported. This includes support
for sharing the same event dispatch thread between Java FX and
Swing.
* The new operation "Find/Remove unused callables" has been introduced
in order to get rid of procedures which are never used in your test
suites.
* The new procedure qfs.check.compareDates in the standard library
qfs.qft can be used to compare two date strings in arbitrary format.
* The procedure qfs.utils.xml.compareXMLFiles has two new parameters
for stripping white space and for handling namespace definitions.
* The diagnostic information logged for every error and exception now
also includes a full thread dump of the SUT as well as the SUT's
Java memory usage and the current system load.
* When executing QF-Test in batch mode for report generation or other
commands that don't run any tests, QF-Test now runs in AWT headless
mode if the command line argument -nomessagewindow is specified.
This is useful when using such commands from a continuous
integration server that runs without a display.
* 'Server HTTP request' nodes can now use the https:// syntax to send requests to secured servers.
* The default Java memory setting for QF-Test is now 512 MB and the
default Java memory for QF-Test's browser wrapper has been increased
to 256 MB.
* The new ResetListener extension API can be used to influence the
behavior of the command »Run«-»Reset everything«, e.g. to prevent a
certain client process from being killed or to perform additional
cleanup like deleting Jython or Groovy variables.
Bugs fixed:
-----------
* Changing the order of 'Catch' nodes under a 'Try' node was broken by
the addition of the optional 'Else' node in QF-Test version 4.0.2.
* Display of large amounts of output from the SUT in the terminal
could slow down QF-Test and the test execution if rich text
formatting was activated for the terminal.
* In very rare cases test execution in batch mode could hang on
Windows when the SUT ran into a deadlock.
* Depending on timing, QF-Test sometimes did not kill all process
clients when exiting.
* Importing test-suites could be very slow if tolerant class matching
was activated.
* If an Excel sheet contains too deeply nested functions, QF-Test will
now throw a TestException instead of ignoring those cells. Handling
such Excel files requires an increased thread stack size which can
be achieved by starting QF-Test with the command line argument
-J-Xss512k.
* When recording procedures, some placeholders in nested sequences in
the template were not replaced correctly.
* The procedure recorder is now able to create container procedures
with standard event nodes and sequences without relying on
component-specific procedures.
* If an HTTP exception was thrown from a 'Server HTTP request' node
due to a status code > 399, the variables 'responseDate' and
'statusCode' were not set correctly.
* The horizontal scroll bar of the internal script editor was not set
correctly when hard TAB characters were contained in the code.
* Waiting for the absence of a multi-level sub-item now works
correctly.
* For WebStart applications QF-Test now also automatically handles the
German version of a possible HTTPS certificate warning.
* HTML reports, testdoc and pkgdoc documents could get scrambled by
HTML comments split across several lines if HTML pass-through was
activated.
* The declaration and encoding of XML reports, testdoc and pkgdoc
documents were inconsistent if the default file encoding of
QF-Test's Java VM was not ISO-8859-1.
* The tool for the "QF-Test Java Configuration" could not save values
to the Windows registry if setup.exe was never run before.
* When recording in Swing applications with a great number of deeply
nested components, performance could suffer severely.
* In Swing applications running on Java 8, bringing up the menu for
check recording could subsequently block input into text fields.
* The workaround for focus issues in Java 8 on Windows when changing
the topmost state of a window has been improved to handle another
special case.
* For hybrid Java FX and Swing applications replaying an event on an
embedded component now correctly raises and activates the
surrounding window of the other toolkit which improves the
reliability of such tests.
* Replaying a file selection in Java FX for a save dialog now also
sets the ExtensionFilter to match the chosen file's extension.
* Trying to record a check directly on the header of an SWT Table
caused an exception.
* Third-party plugins and extensions were not initialized correctly
for Firefox 30 and above.
* Resolving list items now also works for <SELECT> nodes and generic
ComboBox elements that are located in another list.
* The resolver for the AJAX framework ZK has been updated to version
1.1.1 which fixes a few minor issues and improves handling of
MenuItems.
* Playback of semi-hard mouse events with modifiers like [Ctrl] has
been fixed.
* Checks for table cells in KTable components were not recorded
correctly.
Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.10.0
Description
I have a script that creates a Parquet file and then writes it out to a BufferOutputStream and then into a BufferReader with the intention of passing it to a place that takes a file-like object to upload it somewhere else. But the other location relies on being able to seek to the end of the file to figure out how big the file is, e.g.
   reader.seek(0, 2)
   size = reader.tell()
   reader.seek(0)
But when I do that the following exception is raised:
   pyarrow/io.pxi:209: in pyarrow.lib.NativeFile.seek
       ???
   _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
   >   ???
   E   pyarrow.lib.ArrowIOError: position out of bounds
I compared it to casting to an io.BytesIO instead which works:
   import io
   import pyarrow as pa

   def test_arrow_output_stream():
       output = pa.BufferOutputStream()
       output.write(b'hello')
       reader = pa.BufferReader(output.getvalue())
       reader.seek(0, 2)
       assert reader.tell() == 5

   def test_python_io_stream():
       output = pa.BufferOutputStream()
       output.write(b'hello')
       buffer = io.BytesIO(output.getvalue().to_pybytes())
       reader = io.BufferedRandom(buffer)
       reader.seek(0, 2)
       assert reader.tell() == 5
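As an aside, the seek-to-end/tell idiom the uploader depends on can be exercised against any seekable file-like object using only the standard library (pyarrow is not involved in this sketch):

```python
import io

def file_size(f):
    """Size of a seekable file-like object, restoring the current position."""
    pos = f.tell()
    f.seek(0, 2)        # whence=2 (io.SEEK_END): offset relative to end of stream
    size = f.tell()
    f.seek(pos)         # put the stream back where we found it
    return size

buf = io.BytesIO(b"hello")
print(file_size(buf))   # → 5
print(buf.tell())       # → 0
```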
SCons / DoxygenBuilder
The SCons wiki has moved to
Doxygen Builder
The Doxygen builder will generate doxygen docs, given a Doxyfile. It will scan the Doxyfile and determine what directories will be created and what sources are used to generate the docs. This frees you up from writing special code to manage clean up and regeneration of the docs.
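For orientation, a minimal Doxyfile of the kind the builder scans might look like this (illustrative values only; the keys are standard Doxygen settings, and INPUT, FILE_PATTERNS, RECURSIVE and the GENERATE_* switches are exactly the ones the scanner and emitter below read):

```
PROJECT_NAME     = MyProject
OUTPUT_DIRECTORY = docs
INPUT            = src include
FILE_PATTERNS    = *.cpp *.h
RECURSIVE        = YES
GENERATE_HTML    = YES
GENERATE_LATEX   = NO
```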
N.B. It seems there was a bug in scons versions before
0.97.0d20070918 which prevented dependencies from working for this builder. See this email to the mailing list for details.
Russel Winder has started a Bazaar branch on Launchpad to try and acrete all the work found on and from this page into a single good tool. You may want to use this version rather than trying to replicate the various changes needed to reconcile all the variations reported on this page. If you find any errors or improvements please contribute them back via a pull request rather than posting code to this page. Thanks.
The above now appears to have moved to a Mercurial branch on Bitbucket:.
Please use the Mercurial branch above and not any of the codes below, which are left here just to preserve the historical record.
Usage
Install the tool as site_scons/site_tools/doxygen.py (or another directory on your toolpath). Then, in your SConstruct file:
   # scons buildfile
   # the doxygen package file needs to be in toolpath
   env = Environment(tools = ["default", "doxygen"])
   env.Doxygen("Doxyfile")
Historical instructions
The original instructions for use are given below.
Save the following script as file 'doxygen.py' and put its directory in the 'toolpath' list as shown in "Usage" below.
   # vim: set et sw=3 tw=0 fo=awqorc ft=python:
   #
   # Astxx, the Asterisk C++ API and Utility Library.
   # Copyright (C) 2005, 2006  Matthew A. Nicholson
   # Copyright (C) 2006  Tim Blechmann
   #

   import os
   import os.path
   import glob
   from fnmatch import fnmatch

   def DoxyfileParse(file_contents):
      """
      Parse a Doxygen source file and return a dictionary of all the values.
      Values will be strings and lists of strings.
      """
      data = {}

      import shlex
      lex = shlex.shlex(instream = file_contents, posix = True)
      lex.wordchars += "*+./-:"
      lex.whitespace = lex.whitespace.replace("\n", "")
      lex.escape = ""

      lineno = lex.lineno
      token = lex.get_token()
      key = token   # the first token should be a key
      last_token = ""
      key_token = False
      next_key = False
      new_data = True

      def append_data(data, key, new_data, token):
         if new_data or len(data[key]) == 0:
            data[key].append(token)
         else:
            data[key][-1] += token

      while token:
         if token in ['\n']:
            if last_token not in ['\\']:
               key_token = True
         elif token in ['\\']:
            pass
         elif key_token:
            key = token
            key_token = False
         else:
            if token == "+=":
               if key not in data:
                  data[key] = []
            elif token == "=":
               data[key] = []
            else:
               append_data( data, key, new_data, token )
               new_data = True

         last_token = token
         token = lex.get_token()

         if last_token == '\\' and token != '\n':
            new_data = False
            append_data( data, key, new_data, '\\' )

      # compress lists of len 1 into single strings
      for k, v in data.items():
         if len(v) == 0:
            data.pop(k)

         # items in the following list will be kept as lists and not converted to strings
         if k in ["INPUT", "FILE_PATTERNS", "EXCLUDE_PATTERNS"]:
            continue

         if len(v) == 1:
            data[k] = v[0]

      return data

   def DoxySourceScan(node, env, path):
      """
      Doxygen Doxyfile source scanner.  This should scan the Doxygen file and
      add any files used to generate docs to the list of source files.
      """
      default_file_patterns = [
         '*.c', '*.cc', '*.cxx', '*.cpp', '*.c++', '*.java', '*.ii', '*.ixx',
         '*.ipp', '*.i++', '*.inl', '*.h', '*.hh', '*.hxx', '*.hpp', '*.h++',
         '*.idl', '*.odl', '*.cs', '*.php', '*.php3', '*.inc', '*.m', '*.mm',
         '*.py',
      ]

      default_exclude_patterns = [
         '*~',
      ]

      sources = []

      data = DoxyfileParse(node.get_contents())

      recursive = data.get("RECURSIVE") == "YES"
      file_patterns = data.get("FILE_PATTERNS", default_file_patterns)
      exclude_patterns = data.get("EXCLUDE_PATTERNS", default_exclude_patterns)

      for node in data.get("INPUT", []):
         if os.path.isfile(node):
            sources.append(node)
         elif os.path.isdir(node):
            if recursive:
               for root, dirs, files in os.walk(node):
                  for f in files:
                     filename = os.path.join(root, f)

                     pattern_check = any(fnmatch(filename, y) for y in file_patterns)
                     exclude_check = any(fnmatch(filename, y) for y in exclude_patterns)

                     if pattern_check and not exclude_check:
                        sources.append(filename)
            else:
               for pattern in file_patterns:
                  sources.extend(glob.glob("/".join([node, pattern])))

      sources = [env.File(path) for path in sources]

      return sources

   def DoxySourceScanCheck(node, env):
      """Check if we should scan this file"""
      return os.path.isfile(node.path)

   def DoxyEmitter(source, target, env):
      """Doxygen Doxyfile emitter"""
      # possible output formats and their default values and output locations
      output_formats = {
         "HTML": ("YES", "html"),
         "LATEX": ("YES", "latex"),
         "RTF": ("NO", "rtf"),
         "MAN": ("NO", "man"),
         "XML": ("NO", "xml"),
      }

      data = DoxyfileParse(source[0].get_contents())

      targets = []
      out_dir = data.get("OUTPUT_DIRECTORY", ".")

      # add our output locations
      for k, v in output_formats.items():
         if data.get("GENERATE_" + k, v[0]) == "YES":
            targets.append(env.Dir( os.path.join(out_dir, data.get(k + "_OUTPUT", v[1]))) )

      # don't clobber targets
      for node in targets:
         env.Precious(node)

      # set up cleaning stuff
      for node in targets:
         env.Clean(node, node)

      return (targets, source)

   def generate(env):
      """
      Add builders and construction variables for the Doxygen tool.
      This is currently for Doxygen 1.4.6.
      """
      doxyfile_scanner = env.Scanner(
         DoxySourceScan,
         "DoxySourceScan",
         scan_check = DoxySourceScanCheck,
      )

      import SCons.Builder
      doxyfile_builder = SCons.Builder.Builder(
         action = "cd ${SOURCE.dir} && ${DOXYGEN} ${SOURCE.file}",
         emitter = DoxyEmitter,
         target_factory = env.fs.Entry,
         single_source = True,
         source_scanner = doxyfile_scanner,
      )

      env.Append(BUILDERS = {
         'Doxygen': doxyfile_builder,
      })

      env.AppendUnique(
         DOXYGEN = 'doxygen',
      )

   def exists(env):
      """
      Make sure doxygen exists.
      """
      return env.Detect("doxygen")
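A toy illustration of the parsing idea used by DoxyfileParse above (deliberately much simpler than the real thing: no += handling and no line continuations):

```python
import shlex

def parse_simple_doxyfile(text):
    """Toy Doxyfile parser: KEY = value... lines only; '#' starts a comment."""
    data = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        data[key.strip()] = shlex.split(value)
    return data

conf = parse_simple_doxyfile("""
PROJECT_NAME = Demo
INPUT = src include   # two input directories
RECURSIVE = YES
""")
print(conf["INPUT"])       # → ['src', 'include']
print(conf["RECURSIVE"])   # → ['YES']
```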
Note added by Robert Lupton, rhl@astro.princeton.edu
I had to make two changes to make this work.
1. I had to double the $ in the Action:

      action = env.Action("cd $${SOURCE.dir} && $${DOXYGEN} $${SOURCE.file}"),

2. As written, the Builder runs from the top level directory TOP when it scans the doxyfile, but runs doxygen from the source directory. This means that if you set INPUT to e.g. "..", the scanner will set the dependencies to refer to all files found by searching TOP/.. --- which isn't what you want!
Here's a fix (around line 122):
   #
   # We're running in the top-level directory, but the doxygen
   # configuration file is in the same directory as node; this means
   # that relative pathnames in node must be adjusted before they can
   # go onto the sources list
   #
   conf_dir = os.path.dirname(str(node))

   for node in data.get("INPUT", []):
      if not os.path.isabs(node):
         node = os.path.join(conf_dir, node)
Note added by SK
The code above originally had the following initialization of the action = argument when creating the Builder:

   action = env.Action("cd ${SOURCE.dir} && ${DOXYGEN} ${SOURCE.file}"),
The env.Action() call explicitly asks for the string to be evaluated at call time, when the action is created, which is why Robert found it necessary to double the $ characters. (It probably did work in earlier versions, but variable substitution in construction environment methods has been "cleaned up" in some recent versions, and this may have been a casualty.)
Since there's nothing special about the action being created (no strfunction, for example), it's much simpler to just pass the command-line string to the Builder and let SCons create the Action object.
action = "cd ${SOURCE.dir} && ${DOXYGEN} ${SOURCE.file}",
I updated the code above so that people who cut and paste without reading all the way to the bottom of the page shouldn't have this problem.
Additional update 6 March 2007: There was also a left-over env.Builder that had to be changed to the raw form of the call to avoid variable expansion earlier than we want. Code above changed.
Note added by Dirk Reiners, dirk@louisiana.edu
I added two (at least for me ;)) important features of doxygen: variable substitution and hierarchical doxygen files.
Variable substitution allows doxygen to reference variables from the scons environment using $(VARNAME). This is very useful for things like version numbers or for only having certain parts (as defined by scons) included in the documentation without having to mess with doxygen files.
Hierarchical doxygen files just interpret the @INCLUDE key as an include.
I also had trouble with files that started with a key, I fixed that.
The changes are a little longish for putting them in the text, so I attached the changed file doxygen.py_dr_070226. Note that I'm a python newbie, so there are probably more elegant ways to do some of the things I did. Feel free to change them.
Hope it helps.
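The $(VARNAME) substitution described above can be sketched in isolation like this (a stand-alone illustration, not the attached patch):

```python
import re

def substitute(text, env):
    """Replace $(VARNAME) tokens with values from an environment mapping."""
    return re.sub(r"\$\((\w+)\)",
                  lambda m: str(env.get(m.group(1), m.group(0))),
                  text)

print(substitute("PROJECT_NUMBER = $(VERSION)", {"VERSION": "1.2.3"}))
# → PROJECT_NUMBER = 1.2.3
print(substitute("$(UNKNOWN)/include", {}))
# → $(UNKNOWN)/include  (unknown variables are left untouched)
```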
Note added by anonymous
Replace the line

   token = env[token[2:-1]]

by

   token = env[token[2:token.find(")")]]

to suppress wrong warnings when using environment variables in Doxyfile as path (like in "$(MY_LIBRARY)/include").
Note added by Christoph Boehme, cxb632@bham.ac.uk
Robert Lupton noted that you have to change the source paths if you keep your Doxyfile in a subdirectory and use relative paths. I found that I had to do the same for the target path in the Doxyfile. Therefore, I added the following lines after line 160:
   if not os.path.isabs(out_dir):
      conf_dir = os.path.dirname(str(source[0]))
      out_dir = os.path.join(conf_dir, out_dir)
This is essentially the same code as Robert Lupton's.
Adding tagfile to targets and html templates to sources
The following code adds the tagfile to the target list. I added it in line 166:
   # add the tag file if necessary:
   tagfile = data.get("GENERATE_TAGFILE", "")
   if tagfile != "":
      if not os.path.isabs(tagfile):
         conf_dir = os.path.dirname(str(source[0]))
         tagfile = os.path.join(conf_dir, tagfile)
      targets.append(env.File(tagfile))
To add the html templates from the Doxyfile to the list of sources, you need to apply Robert Lupton's change and add the following snippet in line 137:
   # Add additional files to the list of source files:
   def append_additional_source(option):
      file = data.get(option, "")
      if file != "":
         if not os.path.isabs(file):
            file = os.path.join(conf_dir, file)
         if os.path.isfile(file):
            sources.append(file)

   append_additional_source("HTML_STYLESHEET")
   append_additional_source("HTML_HEADER")
   append_additional_source("HTML_FOOTER")
You can easily add dependencies on other output file templates by adding additional calls to append_additional_source().
Addendum 18 July 2007: I added some code to add tagfiles to the list of sources. Since the tagfiles-option allows for equal-signs in the value, I had to change the parsing code a bit. The new code is found in file doxygen.py . This file also includes the other changes I have made.
Note added by Reinderien
I believe that the line
"MAN": ("YES", "man"),
should read
"MAN": ("NO", "man"),
I was getting unnecessary doxygen runs, and scons --debug-explain showed that doxygen.py thinks the man target is on by default when it isn't.
Pierre said:
The encoding used by the browser is defined in the Content-Type meta tag, or the content-type header ; if not, the default seems to vary for different browsers. So it's definitely better to define it
The argument stream_encoding used in FieldStorage *must* be this encoding
I say:
I agree it is better to define it. I think you just said the same thing that the page I linked to said, I might not have conveyed that correctly in my paraphrasing. I assume you are talking about the charset of the Content-Type of the form page itself, as served to the browser, as the browser, sadly, doesn't send that charset back with the form data.
Pierre says:
But this raises another problem, when the CGI script has to print the data received. The built-in print() function encodes the string with sys.stdout.encoding, and this will fail if the string can't be encoded with it. It is the case on my PC, where sys.stdout.encoding is cp1252 : it can't handle Arabic or Chinese characters
I say:
I don't think there is any need to override print, especially not builtins.print. It is still true that the HTTP data stream is and should be treated as a binary stream. So the script author is responsible for creating such a binary stream.
The FieldStorage class does not use the print method, so it seems inappropriate to add a parameter to its constructor to create a print method that it doesn't use.
For the convenience of CGI script authors, it would be nice if CGI provided access to the output stream in a useful way... and I agree that because the generation of an output page comes complete with its own encoding, that the output stream encoding parameter should be separate from the stream_encoding parameter required for FieldStorage.
A separate, new function or class for doing that seems appropriate, possibly included in cgi.py, but not in FieldStorage. Message 125100 in this issue describes a class IOMix that I wrote and use for such; codifying it by including it in cgi.py would be fine by me... I've been using it quite successfully for some months now.
The last line of Message 125100 may be true, perhaps a few more methods should be added. However, print is not one of them. I think you'll be pleasantly surprised to discover (as I was, after writing that line) that the builtins.print converts its parameters to str, and writes to stdout, assuming that stdout will do the appropriate encoding. The class IOMix will, in fact, do that appropriate encoding (given an appropriate parameter to its initialization. Perhaps for CGI, a convenience function could be added to IOMix to include the last two code lines after IOMix in the prior message:
   @staticmethod
   def setup( encoding="UTF-8"):
       sys.stdout = IOMix( sys.stdout, encoding )
       sys.stderr = IOMix( sys.stderr, encoding )
Note that IOMix allows the users choice of output stream encoding, applies it to both stdout and stderr, which both need it, and also allows the user to generate binary directly (if sending back a file, for example), as both bytes and str are accepted.
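For comparison, on present-day Python the same stream re-wrapping that IOMix.setup performs can be had from the standard library alone; a sketch (setup_streams is a hypothetical name, and IOMix itself is not reproduced here):

```python
import io
import sys

def setup_streams(encoding="UTF-8"):
    """Rewrap stdout/stderr so printed str objects are encoded as requested."""
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding=encoding)
    sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding=encoding)

# The wrapper pattern, demonstrated against an in-memory byte stream:
raw = io.BytesIO()
wrapped = io.TextIOWrapper(raw, encoding="UTF-8")
wrapped.write("héllo")
wrapped.flush()
print(raw.getvalue())   # → b'h\xc3\xa9llo'
```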
print can be used with a file= parameter in 3.x which your implementation doesn't permit, and which could be used to write to other files by a CGI script, so I really, really don't think we want to override builtins.print without the file= parameter, and specifically tying it to stdout.
My message 126075 still needs to be included in your next patch.
Hi - This is a follow up from my last post:
but need help with my code. I would like to test that accessing the HW reg using a struct unionized with a UINT32 word will actually use 32 bits. Here is my code, followed by the errors I get when I run gcc:
#include <stdio.h>

typedef union
{
    struct {
        /// Reserved - read as 0
        unsigned reserved : 30;

        /** This bit contains the interrupt status. Reading this register
         * clears this bit.
         */
        unsigned stats : 1;

        /** This field masks the interrupt.
         * 0 - Masked
         * 1 - Enabled
         */
        unsigned maskn : 1;
    } bits;

    unsigned long word;
} volatile RegStruct;

RegStruct *hwReg = (RegStruct *)0x08002000

void SetReg0(void)
{
    int newVal = 0;
    RegStruct val;

    val.word = hwReg->word;
    val.bits.maskn = newVal;
    hwReg->word = val.word;
}

void SetReg1(void)
{
    int newVal = 1;
    RegStruct val;

    val.word = hwReg->word;
    val.bits.maskn = newVal;
    hwReg->word = val.word;
}

int main(void)
{
    RegStruct reg;

    SetReg0();
    int x = reg.bits.maskn;
    printf("This is setting to 0: %d \n", x);

    SetReg1();
    int y = reg.bits.maskn;
    printf("This is setting to 1 : %d \n", y);

    return 0;
}
testdriver2.c:31: error: syntax error before 'void'
testdriver2.c:35: error: syntax error before '.' token
Why is this simple test failing? Please help.
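For what it's worth, both errors point at the first tokens after the file-scope hwReg declaration, which is the classic symptom of a missing statement terminator: the line RegStruct *hwReg = (RegStruct *)0x08002000 has no trailing ';'. Below is a compilable sketch of the same fragment with the semicolon added; since nothing lives at 0x08002000 on a host PC, the "register" is backed by an ordinary variable here (fake_reg and set_maskn are names invented for this sketch, not from the original post):

```c
typedef union
{
    struct {
        unsigned reserved : 30;   /* reserved - read as 0 */
        unsigned stats    : 1;    /* interrupt status */
        unsigned maskn    : 1;    /* interrupt mask: 0 - masked, 1 - enabled */
    } bits;
    unsigned long word;
} volatile RegStruct;

/* Host-side stand-in for the memory-mapped register at 0x08002000. */
static RegStruct fake_reg;
RegStruct *hwReg = &fake_reg;     /* note the terminating ';' */

unsigned long set_maskn(unsigned newVal)
{
    RegStruct val;                /* read-modify-write via a local copy */

    val.word = hwReg->word;
    val.bits.maskn = newVal & 1u;
    hwReg->word = val.word;
    return hwReg->word;
}
```

Setting maskn and observing word change confirms the bit-field really lands inside the same word as the union's integer member; the exact bit position is ABI-dependent.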
Red Hat Bugzilla – Bug 117140
Backspace recovers deleted text
Last modified: 2007-11-30 17:10:37 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040115
Description of problem:
Backspace on Canna clients sometimes recovers deleted text instead
of cutting the tail of the input.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Start kinput2
2. Press 'b' 10 times.
3. Press backspace key 5 times.
4. Press 'b' 1 time.
5. Press backspace key 1 time.
Actual Results: Recover deleted text.
Expected Results: Delete 1 char at tail.
Additional info:
This bug happen on all Canna clients. kinput2-canna,
nvi-canna(RHL9), xemacs, im-canna().
I can't seem to reproduce this with a stock RHEL 3 WS (Japanese
install with all RHN updates). Maybe I don't understand your
instructions to reproduce. Using its gnome-terminal, and typing the
sequence after an "echo" command, and pressing return (which commits
the pre-edit sequence), it seems to output only five "っ", which is what
I expected the pre-edit buffer to contain.
Hmm, I could reproduce this problem once, but it seems not always,
because I can't reproduce it right now.
OK, if I remove ~/.canna, it works correctly. So if Adrian had ~/.canna,
he would see the same problem. If the user doesn't have ~/.canna, the
kinput2 candidate list for 'shou' is 12 lines, but with the ~/.canna that
useradd generates it is 24 lines.
The default ~/.canna might have the problem.
I think Tagoh also changed it to use kana key binding, right?
(bhuang@redhat.com is still in QA contact...)
It seems to happen if you enable break-into-roman. And if .canna is
missing, it doesn't appear, so break-into-roman seems to be disabled by
default.
comment 3: Nakai, could you attach a sample .canna file to this bug so
other QA testers can easily reproduce? Thanks!
make sure your .canna file includes (setq break-into-roman t) to
reproduce this problem.
Sane guys would use /etc/skel/.canna for test instead of my copy.
Fixed in 3.7p1-5.
And now for some things to read over the weekend, if you have some spare time, that is.
162: A Swig of Butts
In this episode, we discuss Marc Benioff’s comments on regulating social media, Larry Ellison’s comments on outcompeting Salesforce in SaaS, using namespaces, and collecting debug logs for unauthenticated site users.
Salesforce development plugins, part 1 — Illuminated Cloud
Today we’re starting a 2-part series of guest posts from the creators of IntelliJ IDEA plugins for Salesforce (or Force.com) developers. The plugins are Illuminated Cloud 2 and JetForcer.
The Humble Book Bundle: Mobile App Development by Packt
It’s time to apply yourself. Hone your mobile app development skills with our new bundle from Packt! Add a sweet stack of knowledge to your digital library with titles like Mastering iOS 11 Programming, Android Development with Kotlin, React and React Native, and lots more. Plus, learn from a selection of videos including iOS 11 Programming with Swift, Mastering Kotlin for Android Development, and five more courses of over two hours each.
Progressive web apps
Progressive web apps take traditional web sites/applications — with all of the advantages the web brings — and add a number of features that give them many of the same user experience advantages as native apps. This set of docs tells you all you need to know.
Developing the Star Wars opening crawl in HTML/CSS
Humble Book Bundle: Sous Geek Cookbooks!
I'm learning the Python language at the moment and I've got the following problem.
class room:
    guest = None
    guestnumber = None

    def guest(self):
        self.guest = input()

    def guestnumber(self):
        self.guestnumber = (i+1)

test = []

for i in range(4):
    test.append(room())
    test[i].guest()
    test[i].guestnumber()

for x in range(len(test)):
    print('{0} is registered as guest {1}'.format((test[x].guest),(test[x].guestnumber)))
If I put this all in one file, this script works for me. But when I put the class in another .py file and import the class into the rest of the code, I get an error because the i from the for loop is not defined in the class. How can I use that i variable in the class?
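The preserved thread ends with the question; for context, the usual way around this error is to pass the loop counter into the object instead of reading a global, which also sidesteps the name collisions between the class attributes and the methods (Room, register and the guest-N strings below are illustrative names, not from the original post):

```python
class Room:
    def __init__(self, number):
        self.number = number          # counter handed in explicitly; no global i
        self.guest = None

    def register(self, name):
        self.guest = name

rooms = []
for i in range(4):
    room = Room(i + 1)
    room.register('guest-{0}'.format(i + 1))
    rooms.append(room)

for room in rooms:
    print('{0} is registered as guest {1}'.format(room.guest, room.number))
# → guest-1 is registered as guest 1  (… through guest 4)
```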
Event handling is essentially a process in which one object can notify other objects that an event has occurred. This process is largely encapsulated by multicast delegates, which have this ability built in.
The .NET Framework provides many event-handling delegates, but you can write your own. For example:
delegate void MoveEventHandler(object source, MoveEventArgs e);
By convention, the delegate's first parameter denotes the source of the event, and the delegate's second parameter derives from System.EventArgs and contains data about the event.
The EventArgs class may be derived from to include information relevant to a particular event:
   public class MoveEventArgs : EventArgs {
      public int newPosition;
      public bool cancel;
      public MoveEventArgs(int newPosition) {
         this.newPosition = newPosition;
      }
   }
A class or struct can declare an event by applying the event modifier to a delegate field. In this example, the slider class has a Position property that fires a Move event whenever its Position changes:
   class Slider {
      int position;
      public event MoveEventHandler Move;

      public int Position {
         get { return position; }
         set {
            if (position != value) {        // if position changed
               if (Move != null) {          // if invocation list not empty
                  MoveEventArgs args = new MoveEventArgs(value);
                  Move(this, args);         // fire event
                  if (args.cancel)
                     return;
               }
               position = value;
            }
         }
      }
   }
The event keyword promotes encapsulation by ensuring that only the += and -= operations can be performed on the delegate. Other classes may act on the event, but only the Slider can invoke the delegate (fire the event) or clear the delegate's invocation list.
We are able to act on an event by adding an event handler to it. An event handler is a delegate that wraps the method we want invoked when the event is fired.
In this example, we want our Form to act on changes made to a Slider's Position. This is done by creating a MoveEventHandler delegate that wraps our event-handling method (the slider_Move method). This delegate is added to the Move event's existing list of MoveEventHandlers (which is initially empty). Changing the position on the slider fires the Move event, which invokes our slider_Move method:
   using System;

   class Form {
      static void Main( ) {
         Slider slider = new Slider( );

         // register with the Move event
         slider.Move += new MoveEventHandler(slider_Move);

         slider.Position = 20;
         slider.Position = 60;
      }

      static void slider_Move(object source, MoveEventArgs e) {
         if (e.newPosition < 50)
            Console.WriteLine("OK");
         else {
            e.cancel = true;
            Console.WriteLine("Can't go that high!");
         }
      }
   }
Typically, the Slider class is extended so that it fires the Move event whenever its Position is changed by a mouse movement, keypress, etc.
   attributes? unsafe? access-modifier?
   [ [[sealed | abstract]? override] | new? [virtual | static]? ]?
   event delegate-type event-property-accessor-name {
      attributes? add statement-block
      attributes? remove statement-block
   }
Similar to the way properties provide controlled access to fields, event accessors provide controlled access to an event. Consider the following field declaration:
public event MoveEventHandler Move;
Except for the underscore prefix added to the field (to avoid a name collision), this is semantically identical to:
   private MoveEventHandler _Move;
   public event MoveEventHandler Move {
      add { _Move += value; }
      remove { _Move -= value; }
   }
The ability to specify a custom implementation of add and remove handlers for an event allows a class to proxy an event generated by another class, thus acting as a relay for an event rather than the generator of that event. Another advantage of this technique is to eliminate the need to store a delegate as a field, which can be costly in terms of storage space. For instance, a class with 100 event fields would store 100 delegate fields, even though maybe only 4 of those events are actually assigned. Instead, you can store these delegates in a dictionary, and add and remove the delegates from that dictionary (assuming the dictionary holding 4 elements uses less storage space than 100 delegate references).
Difference between revisions of "View-builder vocabulary"
Revision as of 17:41, 15 May 2012
Vocabulary to describe a view of a context. A view is a tree structure of groups of attributes to be displayed/edited. Part of the Persona Data Model 2.0.
Files
- Most recently published
- SVN source: view-builder.n3
UML Overview
Classes
Group
A logical group of Slots and/or sub-Groups
- subClassOf: ViewNode
- 1..1 skos:prefLabel
- 1..N child
- 0..1 layoutRule
MultiGroup
Similar to a regular Group, a MultiGroup is a parent for a set of child Slots. Different from a Group, it also has a view-builder:attribute attribute. Another difference is that a MultiGroup self-replicates such that there end up being N instances of itself--one for each of the N values of its attribute attribute. Each of these N values is an instance of p:Person (p1, p2, ... pN). The view-builder:attribute attributes of its child Slots should be evaluated within the scope of the parent p:Person p1..pN.
It is used in views that need to display a set of N "credit cards" (really p:Person nodes with p:Buyer roles), or a set of N "addresses" (really p:Person nodes).
- subClassOf: Group
- 1..1 attribute
- 0..1 app-data:script
Slot
Metadata about how an attribute from the Persona vocabulary (or one of its imports) or from the Flat Persona vocabulary or from the rdf (e.g. rdf:class) should be presented.
- subClassOf: ViewNode
- 1..1 attribute
- 0..1 app-data:script
ViewNode
(Abstract) A node in the view hierarchy
- 1..1 displayOrder
- 0..1 layout
Attributes
attribute
The attribute that should be displayed in this Slot, or the attribute to evaluate when handling a MultiGroup.
- domain: Slot
- value: rdf:Property
child
A child of a Group
- domain: Group
- value: ViewNode
displayOrder
A number that indicates the relative position with respect to the displayOrder of sibling objects with this same attribute. Lower numbers indicate precedence.
- domain: Slot or Group
- value: xsd:Integer
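As a tiny illustration of the ordering rule (plain dicts stand in for parsed view nodes here; the labels echo the example later on this page):

```python
slots = [
    {"prefLabel": "PostalCode", "displayOrder": 10},
    {"prefLabel": "AgeRange",   "displayOrder": 1},
    {"prefLabel": "Gender",     "displayOrder": 2},
]

# Lower displayOrder values take precedence among siblings.
ordered = sorted(slots, key=lambda s: s["displayOrder"])
print([s["prefLabel"] for s in ordered])
# → ['AgeRange', 'Gender', 'PostalCode']
```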
layout
An optional URL of an html/javascript template sub-page of a web app.
- domain: ViewNode
- value: URI
layoutRule
Indicates what layout algorithm to use to layout its contents (child Slots & Groups). If no value is provided then a Vbox is assumed.
- domain: Group
- value: LayoutRule
script
After the user edits a slot, this script should be executed. (e.g. to push the updated value to an external website).
- domain: Slot
- value: app-data:Push
Displaying Groups
Each Group MUST have a skos:prefLabel that describes the internationalized string label for the group.
A Group is a logical collection of Slots (and sub-groups). It is entirely up to the look & feel of the UI to determine how this grouping (and sub-grouping) is represented.
There is a special kind of (non-top level) Group called a MultiGroup. Unlike regular Groups it has (like a Slot) a view-builder:attribute. As described in the MultiGroup class above, a MultiGroup is a pattern that describes how its child Slots should be collected together into N groups. N is the number of values of the MultiGroup's view-builder:attribute attribute. This attribute's values are always p:Person instances. Each of these p:Person instances acts as a source of attribute/values for one of these N groups.
MultiGroups are used to display a set of attributes (e.g. credit card number, name, address, phone, etc.) for each of N p:Person entities.:
:AgeRange rdf:type view:Slot ; view:attribute fp:ageRange ; view:displayOrder 1 . :DemographicsGroup rdf:type view:Group ; view:child :PostalCode , :AgeRange , :Gender ; view:displayOrder 1 ; skos:prefLabel "Demographics"^^xsd:string . :Gender rdf:type view:Slot ; view:attribute fp:gender ; view:displayOrder 2 . :Interest rdf:type view:Slot ; view:attribute p:like ; view:displayOrder 1 . :InterestsGroup rdf:type view:Group ; view:child :Interest ; view:displayOrder 2 ; skos:prefLabel "Interests"^^xsd:string . :PostalCode rdf:type view:Slot ; view:attribute fp:postalCode ; view:displayOrder 10 . :ViewRoot rdf:type view:Group ; view:child :InterestsGroup , :DemographicsGroup ; view:displayOrder 1 ; skos:prefLabel "Advertising Profile"^^xsd:string .
The demographics group shown above has slots whose attributes, fp:gender, fp:ageRange, fp:postalCode, are all from the Flat Persona vocabulary.
The interests group shown above has a slot whose attribute is the complex valued attribute, online-behavior:interest. In this rather special case the presentation logic should render the name of the class of the instance (e.g. "Humor > Satire" or "World Destinations > Africa > East Africa > Nigeria") as opposed to rendering the value of an attribute of the instance as is done in the usual case.
A few additional notes on the above example
The FlatPersona namespace is only useful when either (a) there are not multiple entity values of the same attribute being requested, or, (b) there are just a few alternative entities and the "right" one can be figured out by matching using the role tags (e.g. "home" vCard entity vs. "work" vCard entity). Flattening, a simplification. It is an attempt to hide some complexity. But there are times when we can't hide the complexity. Multiple non-role-tagged entity values of an attribute is one of those times.
Displaying the "interests" of a user is a case where the flat persona approach won't work because a single p:Person node has a multi-valued (multi-entity-valued) "p:like" attribute. You'll notice in the view hierarchy under the interests Group there is a Slot whose view:attribute attribute whose value is p:like--a value that is from the persona vocabulary and NOT from the flat persona vocabulary. We have to get all of the values of this attribute. These values are (multiple) int:InterestTopic-class entities. The specific class of this InterestTopic might be "Satire", but of course we probably want to display something like "Humor > Satire" --in other words rendering not just the "leaf" class name but where it sits in relation to the (up to 6 levels) of super-classes ("Humor" being the super-class of "Satire" in this case). | http://wiki.eclipse.org/index.php?title=View-builder_vocabulary&diff=prev&oldid=302322 | CC-MAIN-2015-35 | refinedweb | 1,018 | 57.57 |
The Braaain 1.0
Halloween time again. Last year I tried to do a simple little hardware project to make my emoticon pumpkins glow. That’s cute and all but not very difficult.
This year, I decided to work on this idea I’ve had for more than a year and a half. The Brain – a silicone based brain with controllable LEDs inside. I have some ideas of what to do next with it, but this first iteration is just to be a fun decoration for Halloween.
Why The Brain
Back when Interlock was moving into their current space, there was this cool area in the center of it, that was surrounded by windows. That turned into the network room and I really wanted to use that window for something. Show something cool in the window or whatever. I came up with this idea that I would have a brain to represent network activity. When a host goes down, the brain reflects that. If there was no Internet connectivity, the brain would show that too. The first version of the brain is not to that level yet, but it’s in the right direction.
Brain 1.0
Brain 1.0 is a Platsil GEL-10, silicone brain with a hollowed center. In the center is a plastic project box housing an Arduino with Neo Pixels attached to it. A Neo Pixel is an Adafruit project that is meant to be a low cost, multi-color LED that you can daisy-chain, or string together in-line. There’s really no reason to use Neo Pixels for this project besides the fact that I had some already.
Parts:
- Halloween Brain Jello mold from Amazon
- Platsil GEL-10 from BITY Mold Supply
- Tupperware container donated to the cause
- XL Breaking Bad meth making gloves
- Mixing containers
Making the Brain:
This was the most interesting part to me. I picked up the type of PlatSil that is a 1 to 1 compound either by volume or by mass so I didn’t need to worry about mixing too much. I took 500ML of A and mixed it with 500ML of B. This stuff has a 6 minute lifetime from the time you start mixing to the time it starts to harden. There are ways to slow this down, but again, I didn’t need to do that. I spent 2 minutes mixing because some guy on YouTube said this is important, and my recent adventures in Thermite taught me the lesson that they’re serious.
Before I poured it in, I used a can of Pol-Ease 2300 release which is used to keep the brain separated from the Jello mold. I was reminded the hard way what happens when you forget this. Pouring it into the mold was pretty simple but I made a small clay holder for it so I could make sure it stayed level. After the contents were dumped in, I sunk the plastic project container that was going to be my hollowed inside.
The whole things hardens within 30 minutes but because mine was in the garage in October, it was more like an hour.
Because this stuff isn’t very cheap, I did a demo mold just to make sure I was on the right track.
PlatSil Gel-10:
My goal was to create a mold of a brain that was rubbery and brain like. This Platsil line of chemicals are designed to create molds for other things. There wasn’t a lot of people making actual things from the material itself but I really like the texture and toughness of using it as the model. I will say that it is 100% overkill for what I wanted. There’s probably someone that can recommend a better, cheaper, alternative but for me this worked in the time frame I needed it to. They have a bunch of different types and I really wanted light to diffuse through it so I got that translucent version. It still comes out pretty white depending on how thick of a mold you’re making.
Neo Pixel:
Neo Pixels are really slick. They have 4 leads on them. Power, Ground, signal in, and a signal out. The biggest benefit is that each pixel is individually addressable without the need for multiple connections. Pixel 0 connects to pixel 1 that connects to pixel N through a single wire connected to your microcontroller or whatever you’re using.
Power takes +5v, and there is a warning about memory consumption especially with smaller Arduinos and extremely long chains of Neo Pixels (up to 500 at 30 FPS). My 4 didn’t mind.
Adafruit has a Neo Pixel library that you can use pretty easily, even if you just want to hack one of their demos.
Arduino:
This is my hacked code to make the brain throb between red and pink. Again, a Neo Pixel is overkill for doing this but it’s fun none-the-less and I’ll be upgrading it next iteration.
#include <Adafruit_NeoPixel.h> //Hacked from the original Adafruit library demo #define PIN 6 //my control pin //); void setup() { strip.begin(); strip.show(); // Initialize all pixels to 'off' } void loop() { //Start out with a pink brain looking color colorWipe(strip.Color(255, 48, 48), 1); // Hot Pink //Throb read and then fade out heartThrob) { //secret rainbow mode uint16_t i, j; for(j=0; j<256; j++) { for(i=0; i<strip.numPixels(); i++) { strip.setPixelColor(i, Wheel((i+j) & 255)); } strip.show(); delay(wait); } } void heartThrob(uint8_t wait) { uint16_t i, j; //Adjust 60 and 90 to the starting and ending colors you want to fade between. for(j=60; j<90; j++) { for(i=0; i<strip.numPixels(); i++) { strip.setPixelColor(i, Wheel((i); } }
from interlockroc on November 2nd, 2013 | http://www.interlockroc.org/2013/11/02/the-braaain-1-0/ | CC-MAIN-2015-11 | refinedweb | 968 | 72.16 |
Opened 6 years ago
Closed 6 years ago
#19161 closed Bug (fixed)
Missing clean_password in custom user documentation
Description (last modified by )
class UserChangeForm(forms.ModelForm): """A form for updateing users. Includes all the fields on the user, but replaces the password field with admin's pasword hash display field. """ password = ReadOnlyPasswordHashField() class Meta: model = MyUser
Needs the following added:
def clean_password(self): # Regardless of what the user provides, return the initial value. # This is done here, rather than on the field, because the # field does not have access to the initial value return self.initial["password"]
Or you get a not-null constraint violation on form submit.
Change History (6)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
comment:4 Changed 6 years ago by
Actually, this should be a release blocker, since it's a fundamentally misleading piece of documentation in a new feature.
comment:5 Changed 6 years ago by
comment:6 Changed 6 years ago by
Note: See TracTickets for help on using tickets.
Fixed formatting (please use preview). | https://code.djangoproject.com/ticket/19161 | CC-MAIN-2018-43 | refinedweb | 186 | 52.09 |
I'm getting either a null pointer, or some other kind of problem. So far, in this program, I only have the multiply function completed to the point where it ought to work. However, either in my toString method or my function call, i think I've got something wrong. The problem may be elsewhere, too, but I think it's just there. You have to enter a value first, then you select multiply on the case statement. I have to do this part first so I can test to make sure the code I've got works. Without further ado:
My main:
Code :
import java.util.Scanner; public class main { public static void main(String [] args) { Scanner scan = new Scanner(System.in); int x = 0; String choice = null; String entry = null; String answer = null; Number n = new Number(); Number m = new Number(); while (x != 1) { /* This is a simple menu, which feeds into a switch statement. The switch * has to be done with a literal or an integer, so we first parse the * string given by the scanner to just the character at position 0 (the * first spot). It switches accordingly. */ System.out.print("Enter a value: e Add a value: a\n" + "Subtract: s Multiply: m\n" + "Reverse sign: r Clear: c\n" + "Quit: q\n -->" ); choice = scan.next(); char choice1 = choice.charAt(0); switch(choice1) { case 'e': System.out.print("Please enter a number: "); entry = scan.next(); n = new Number(entry); break; case 'a': break; case 's': break; case 'm': System.out.print("Please enter a number: "); entry = scan.next(); m = new Number(entry); n = m.multiply(n); answer = n.toString(); System.out.println("" + answer); break; case 'r': n.reverseSign(); break; case 'c': n = new Number(); break; case 'q': x = 1; break; default: System.out.println("Invalid selection, please try again"); } } } }
My Number Class
Code :
public class Number { private Node low, high; private int digitCount = 0; private int decimalPlaces = 0; private boolean negative = false; public Number()// constructor { low = null; high = null; } // while I'm pretty sure that to enter a value, you don't need to worry about this // this constructor is needed when you are subtracting or doing any other elementary // operation. without it, it wouldn't be able to make it a linked list. public Number(String str) { accept(str); } public void accept(String str) { int length = str.length(); // in case of a decimal number, we have to make sure we're ready for it. we do that by parsing a float, then // later casting it to an integer. we use this only to see if x is greater than 0. int x = (int)Float.parseFloat(str); // if the integer is less than 0, it is negative // and we flip the sign, else we do nothing if (x < 0) { System.out.println("" + x); reverseSign(); } for(int i = 0; i < length; i++) { char place = str.charAt(i); /* I'm checking to see if the character at the place is a * digit. If it is, I cast it as an integer, which it is, * and send it to the insertNode method, which adds it as a node. */ if(Character.isDigit(place)) { insertNodeBack((int)place); } else if(place == '.') { decimalPlaces = length - i; } else { //Do nothing, it's a negative sign. The loop will increment. } } } public Number add(Number n) { return n; //. Returns a Number which represents "this + n". } public Number subtract(Number n) { return n; //. Returns a Number which represents "this - n". 
} public Number multiply(Number n) { int newDigit; Number product = new Number(); Node npointer = n.high; while (npointer != null) { Number partialProduct = new Number(); int carry = 0; Node current = this.low; while(current != null) { newDigit = npointer.getValue() * current.getValue(); carry = newDigit / 10; newDigit = newDigit %10; partialProduct.digitCount++; current = current.previous; if(carry != 0) { partialProduct.insertNodeFront(carry); partialProduct.digitCount++; } product.insertNodeBack(0); product.digitCount++; product = product.addAbsolute(partialProduct); npointer = npointer.next; product.decimalPlaces = this.decimalPlaces + n.decimalPlaces; } } return product; } public void reverseSign() //This method changes the sign of the number represented by the linked list. { if (negative == true) negative = false; else negative = true; } //Returns a String representation of the Number (so it can be displayed by System.out.print()). public String toString() { int beforeDecimal = digitCount - decimalPlaces, digit; Node current = this.high; String solution = null; if(negative) solution += '-'; for(int i = 0; i < beforeDecimal; i++) { digit = current.getValue(); solution += digit; current = current.next; } solution += '.'; for(int i = digitCount - beforeDecimal; i < digitCount; i++) { digit = current.getValue(); solution += digit; current = current.next; } return solution; } /* Because of the way I've set up the linked list, the first piece added will always be the highest * order node. Thus, this method is the same to just adding a last node every time. When this method is called * we already know if we have a negative or not, from the parsing done in accept. If the list isn't empty, we simply * set the pointer for the low node to the node we create. 
*/ private void insertNodeBack(int value) { Node newNode = new Node(value); if (isEmpty()) this.high = newNode; else { low.next = newNode; newNode.previous = low; } this.low = newNode; } /*This method tells if the list is empty or not.*/ private boolean isEmpty() { return high == null; } private Number addAbsolute(Number n) { Number sum = new Number(); int carry = 0; int added; Node thisPointer = this.low; Node nPointer = n.low; while (thisPointer != null) { added = thisPointer.getValue();// + nPointer.getValue(); carry = added / 10; added = added %10; sum.digitCount++; thisPointer = thisPointer.previous; nPointer = nPointer.previous; if(carry != 0) { sum.insertNodeFront(carry); sum.digitCount++; } //set ap value for decimal places in sum } return sum; } /* Well, because of the functions we use, we have to properly be able to add to the head of the list * instead of just the back. To do this, we'll do pretty much the same thing as we did on insertNodeTail, * however, we'll insert to the front. */ private void insertNodeFront(int value) { Node newNode = new Node(value); if (isEmpty()) low = newNode; //its both high and low if list is empty else { high.previous = newNode; newNode.next = high; // reassign the pointers for existing high } high = newNode; //override existing high } private int compareToAbsolute(Number n) { return decimalPlaces; } private Number subtractAbsolute(Number n) { Number difference = new Number(); int borrow = 0, newDigit; Node thisPointer = this.low; Node nPointer = n.low; while (thisPointer != null) { newDigit = thisPointer.getValue() - nPointer.getValue(); if (newDigit < 0) { newDigit +=10; borrow = 1; } else { borrow = 0; } difference.insertNodeFront(newDigit); difference.digitCount++; //set app value for dec. places } return difference; } }
And my Node class
Code :
public class Node { public int data; public Node next; public Node previous; public Node(int value) { data = value; } public int getValue() { return data; } }
Any help is greatly appreciated. It's my shot at what is supposed to be a calculator capable of the functions provided. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/5472-null-pointers-printingthethread.html | CC-MAIN-2015-11 | refinedweb | 1,120 | 59.4 |
Ref returns and ref locals
Starting with C# 7. Any changes made to its value by the called method are observed by the caller. A reference return value means that a method returns a reference (or an alias) to some variable. That variable's scope must include the method. That variable's can't have the return type
void.
There are some restrictions on the expression that a method can return as a reference return value. Restrictions. Returning. Violating this rule generates compiler error CS8156, "An expression cannot be used in this context because it may not be returned by reference."
In addition, reference return values are not allowed on async methods. An asynchronous method may return before it has finished execution, while its return value is still unknown.
Defining a ref return value
A method that returns a reference return value must satisfy the following two conditions:
- The method signature includes the ref keyword in front of the return type.
- Each return statement in the method body includes the ref keyword in front of the name of the returned instance.
The following example shows a method that satisfies those conditions and returns a reference to a
Person object named
p:
public ref Person GetContactInformation(string fname, string lname) { // ...method implementation....
The
ref keyword is used both before the local variable declaration and before the method call.;
The
ref keyword is used both before the local variable declaration and before the value in the second example. Failure to include both
ref keywords in the variable declaration and assignment in both examples results in compiler error CS8172, "Cannot initialize a by-reference variable with a value."
Prior to C# 7.3, ref local variables couldn't be reassigned to refer to different storage after being initialized. That restriction has been removed. The following example shows a reassignment:
ref VeryLargeStruct reflocal = ref veryLargeStruct; // initialization refLocal = ref anotherVeryLargeStruct; // reassigned, refLocal refers to different storage.
Ref local variables must still be initialized when they are declared..Join(" ", numbers); }
The following example calls the
NumberStore.FindNumber method to retrieve the first value that is greater than or equal to 16. The caller then doubles the value returned by the method. The output from the example shows the change reflected in the value of the array elements of the
NumberStore instance. performed by returning the index of the array element along with its value. The caller can then use this index to modify the value in a separate method call. However, the caller can also modify the index to access and possibly modify other array values.
The following example shows how the
FindNumber method could be rewritten after
C# 7.3 to use ref local reassignment:
using System; class NumberStore { int[] numbers = { 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023 }; public ref int FindNumber(int target) { ref int returnVal = ref numbers[0]; var ctr = numbers.Length - 1; while ((ctr > 0) && numbers[ctr] >= target) { returnVal = ref numbers[ctr]; ctr--; } return ref returnVal; } public override string ToString() => string.Join(" ", numbers); }
This second version is more efficient with longer sequences in scenarios where the number sought is closer to the end of the array.
See also
ref keyword
Reference Semantics with Value Types | https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/ref-returns | CC-MAIN-2018-34 | refinedweb | 536 | 54.93 |
If you've been hanging out on the Java block for any amount of time then you've likely heard of Groovy. The brainchild of superstar developers James Strachan and Bob McWhirter, Groovy is an agile development language that is based entirely on the Java programming APIs. Groovy is currently in the beginning phase of its Java Specification Request, which was approved in late March of 2004. Groovy is also the scripting language that some claim will forever change the way that you view and utilize the Java platform.
In his opening comments to JSR 241 (see Resources) Groovy co-specification lead Richard Monson-Haefel said that he based his support for Groovy on the conviction that the time has come for the Java platform to include an agile development language. Unlike the many scripting languages that exist as ports to the Java platform, Groovy was written for the JRE. With the request for specification (see Resources), the makers of Groovy have put forth the idea that (in the words of Monson-Haefel) "Java is more than a programming language; it's a robust platform upon which multiple languages can operate and co-exist."
I'll devote this second installment of the new alt.lang.jre column to getting to know Groovy. I'll start with some answers to the most obvious question about this new language (why do you need it?), and then embark on a code-based overview of some of Groovy's most exciting features.
As you learned in last month's column, Groovy isn't the only scripting language that is compliant with the JRE. Python, Ruby, and Smalltalk are just three examples of scripting languages that have been successfully ported to the Java platform. For some developers, this begs the question: Why another language? After all, many of us already combine our Java code with Jython or JRuby for faster application development; why should you learn another language? The answer is that you don't have to learn a new language to code with Groovy. Groovy differentiates itself from the other JRE-compliant scripting languages with its syntax and reuse of standard Java libraries. Whereas Jython and JRuby share the look and feel of their ancestors (Python and Ruby, respectively), Groovy feels like the Java language with far fewer restrictions.
Whereas languages like Jython build upon their parents' libraries, Groovy employs the features and libraries Java developers are most familiar with -- but puts them in an agile development framework. The fundamental tenets of agile development are that code should be well suited to a wide range of tasks and applicable in a variety of ways. Groovy lives up to these tenets by:
- Freeing developers from compilation
- Permitting dynamic types
- Easing syntactical constructs
- Allowing its scripts to be used inside normal Java applications
- Providing a shell interpreter
These features make Groovy a remarkably easy language to learn and use, whether you're a seasoned Java developer or newcomer to the Java platform. In the sections that follow, I'll discuss the above mentioned highlights of Groovy in detail.
Like many scripting languages, Groovy saves compilation for runtime. This means that Groovy scripts are interpreted when they are run, much like JavaScript is interpreted by the browser when a Web page is viewed. Runtime evaluation comes at a cost in terms of execution speed, which could rule out the use of scripting languages in performance intensive projects, but compilation-free coding offers tremendous advantages when it comes to the build-and-run cycle. Runtime compilation makes Groovy an ideal platform for rapid prototyping, building various utilities, and testing frameworks.
For example, running the script Emailer.groovy in Groovy is as easy as typing groovy Emailer.groovy on a command line. If you wanted to run the same Java file (Emailer.java) you would, of course, have to type an extra command: javac Emailer.java, followed by java Emailer. While this might seem trivial, you can easily imagine the advantage of runtime compilation in a larger context of application development.
As you will see shortly, Groovy also permits scripts to drop a main method in order to statically run an associated application.
As with other mainstream scripting languages, Groovy does not require the explicit typing of formal languages such as C++ and the Java language. In Groovy, an object's type is discovered dynamically at runtime, which greatly reduces the amount of code you have to write. You can see this, first, by studying the simple examples in Listings 1 and 2.
Listing 1 shows how you declare a local variable as a String in the Java language. Note that the type, name, and value must all be declared.
Listing 1. Java static typing
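A one-line Java sketch of the kind of declaration being described (the variable name is an illustrative assumption):

```java
String lastName = "daffy";
```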
In Listing 2, you see the same declaration but without the need to declare the variable type.
Listing 2. Groovy dynamic typing
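The equivalent Groovy declaration, again with an illustrative variable name, drops both the type and the semicolon:

```groovy
lastName = "daffy"
```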
You may have also noticed that I was able to drop the semicolon from the declaration in Listing 2. Dynamic types have dramatic consequences when defining methods and their associated parameters: Polymorphism takes on a whole new meaning! In fact, with dynamic typing, you can have all the power of polymorphism without inheritance. In Listing 3, you can really begin to see the role of dynamic typing in Groovy's flexibility.
Listing 3. More Groovy dynamic typing
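A sketch of what such a listing could contain; the names Song, Book, name, and doSomething come from the surrounding prose, while the bodies are assumptions:

```groovy
class Song {
    def name    // any object with a name property will satisfy doSomething
}

class Book {
    def name
}

def doSomething(thing) {
    // no type on thing: anything with a name property works
    println(thing.name)
}
```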
Here, I've defined two Groovy classes, Song and Book, which I'll discuss further in a moment. Both classes contain a name property. I've also defined a function, doSomething, that takes a thing and attempts to print the object's name property.

Because the doSomething function does not define a type for its input parameter, any object will work so long as the object contains a name property. So, in Listing 4, you see what happens when you use both instances of Song and Book as input to doSomething.
Listing 4. Playing around with dynamic typing
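A sketch of such a session, with illustrative values; note the last two lines, which grab a reference to doSomething and invoke the function through that reference:

```groovy
sng = new Song()
sng.name = "Le Freak"
doSomething(sng)            // prints "Le Freak"

bk = new Book()
bk.name = "Groovy in Action"
doSomething(bk)             // prints "Groovy in Action"

func = this.&doSomething    // a reference to the function itself
func(bk)                    // calling the function through the reference
```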
In addition to demonstrating dynamic typing in Groovy, Listing 4 also reveals, in its last two lines, how easy it is to create a reference to a function. This is because everything in Groovy is an object, including functions.
The final thing you should note about Groovy's dynamic type declaration is that it results in fewer import statements. While Groovy does require imports for explicitly utilized types, those imports can be aliased to provide for shorter names.
The next two examples will pull together everything I've discussed so far about dynamic types in Groovy. Both the Java code group and the Groovy code group below make use of Freemarker (see Resources), an open source template engine. Both groups simply create a Template object from a directory and file name, then print the corresponding object's content to standard out; the difference, of course, is in the amount of code each group requires to handle its tasks.
Listing 5. Simple TemplateReader Java class
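One way the Java version might read; the directory and template names are placeholder assumptions, and Template.dump is used to write the template content to standard out:

```java
import java.io.File;
import java.io.IOException;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class TemplateReader {
    public static void main(String[] args) {
        try {
            Configuration cfg = new Configuration();
            cfg.setDirectoryForTemplateLoading(new File("templates"));
            Template tmpl = cfg.getTemplate("simple.ftl");
            tmpl.dump(System.out);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```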
At first glance, the Java code in Listing 5 is quite simple -- especially if you've never seen scripting code before. Fortunately, you've got a Groovy contrast in Listing 6. Now this code is simple!
Listing 6. An even simpler TemplateReader in Groovy
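A matching Groovy sketch (same placeholder names); note the tconf alias and the untyped tmpl variable called out below:

```groovy
import freemarker.template.Configuration as tconf
import freemarker.template.Template

cfg = new tconf()
cfg.setDirectoryForTemplateLoading(new File("templates"))
tmpl = cfg.getTemplate("simple.ftl")
tmpl.dump(System.out)
```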
The Groovy code is half as long as the Java code; here's why:
- The Groovy code requires half as many import statements. Notice also that freemarker.template.Configuration was aliased to tconf, enabling shorthand syntax.
- Groovy permits the variable tmpl of type Template to drop its type declaration.
- Groovy does not require a class declaration or a main method.
- Groovy does not care about any corresponding exceptions, allowing you to drop the import of IOException required in the Java code.
Now, before you move on, think about the last Java class you coded. You probably had to write a lot of imports and declared types followed by an equal number of semicolons. Think about what it would be like to use Groovy to craft the same code. You'd have a far more concise syntax at your disposal, not so many rules to satisfy, and you'd end up with the exact same behavior.
And to think, you're just getting started ...
Extremely flexible syntax
When it comes to syntax, flexibility is the primary ingredient that lets you develop code more efficiently. Much like its influential counterparts (Python, Ruby, and Smalltalk), Groovy greatly simplifies the core library usage and constructs of the language on which it's modeled, which in this case is the Java language. To give you an idea of just how flexible Groovy's syntax is, I'll show you some of its primary constructs; namely classes, functions (via the def keyword), closures, collections, ranges, maps, and iterators.
At the bytecode level, Groovy classes are real Java classes. What's different, however, is that Groovy defaults everything defined in a class to public, unless a specific access modifier has been defined. Moreover, dynamic typing applies to fields and methods, and return statements are not required.
You can see an example of class definition in Groovy in Listing 7, where class Dog has a getFullName method that actually returns a String representing the Dog's full name. All methods, consequently, are implicitly public.
Listing 7. Example Groovy class: Dog
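One possible shape for the Dog class; the prose fixes getFullName and its String return value, while the property layout (a name plus an owner whose last name supplies the second half of the full name) is an assumption:

```groovy
class Dog {
    def name
    def owner      // assumed to be a DogOwner

    def getFullName() {
        return name + " " + owner.lname
    }
}
```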
In Listing 8, you take things one step further, with a DogOwner class that has two properties, fname and lname. Simple so far!
Listing 8. Example Groovy class: DogOwner
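A matching sketch of DogOwner, with just the two properties named in the prose:

```groovy
class DogOwner {
    def fname
    def lname
}
```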
In Listing 9, you use Groovy to set properties and call methods on your Dog and DogOwner instances. It should be obvious, by now, how much easier it is to work with Groovy classes than with Java classes. While the new keyword is required, types are optional and setting properties (which are implicitly public) is quite effortless.
Listing 9. Using Groovy classes
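A sketch of such a session; the owner's first name is illustrative, and only the result "Mollie Waldo" is fixed by the prose:

```groovy
myOwner = new DogOwner()
myOwner.fname = "Andrew"
myOwner.lname = "Waldo"

myDog = new Dog()
myDog.name = "Mollie"
myDog.owner = myOwner

println(myDog.getFullName())   // prints "Mollie Waldo"
```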
Notice how the getFullName method defined in your Dog class returns a String object, which in this case is "Mollie Waldo."
In addition to designating all objects as first class, which many scripting languages do (see sidebar), Groovy also lets you create first class functions, which are, in essence, objects themselves. These are declared with the def keyword and exist outside a class definition. You've actually already seen the def keyword used to define a first class function in Listing 3, and seen a function used in Listing 4. Groovy's first class functions are extremely handy when it comes to defining simple scripts.
One of the most exciting and powerful features found in Groovy is its support for closures. Closures are first class objects that are similar to anonymous inner classes found in the Java language. Both closures and anonymous inner classes are executable blocks of code; however there are some subtle differences between the two. State is automatically passed in and out of closures. Closures can have names. They can be reused. And, most important and true to Groovy, closures are infinitely more flexible than anonymous inner classes!
Listing 10 demonstrates just how powerful closures are. The new and improved Dog class in the listing includes a train method that actually executes the closure with which the Dog instance was created.
Listing 10. Using closures
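A sketch of how such a Dog might look; the constructor parameter and the trick property are assumptions, while the train method and the execution of the stored closure come from the prose:

```groovy
class Dog {
    def name
    def trick          // a closure supplied when the Dog is created

    Dog(aTrick) {
        trick = aTrick
    }

    def train() {
        trick.call()   // execute whatever behavior was passed in
    }
}

sit = { println("I'm sitting!") }
mollie = new Dog(sit)
mollie.train()         // prints "I'm sitting!"
```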
What's more, closures can also accept parameters. As demonstrated in Listing 11, the postRequest closure accepts two parameters (location and xml) and uses the Jakarta Commons HttpClient library (see Resources) to post an XML document to a given location. The closure then returns a String representing the response. Below the closure definition is an example of using it; in fact, calling a closure is just like calling a function.
Listing 11. Using closures with parameters
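A sketch of such a closure; the specific HttpClient calls are an assumption (setRequestBody is the HttpClient 2.x-era API), and the URL and XML are placeholders:

```groovy
import org.apache.commons.httpclient.HttpClient
import org.apache.commons.httpclient.methods.PostMethod

postRequest = { location, xml ->
    post = new PostMethod(location)
    post.setRequestBody(xml)               // HttpClient 2.x-style call
    new HttpClient().executeMethod(post)
    response = post.getResponseBodyAsString()
    post.releaseConnection()
    return response                        // a String holding the response body
}

println(postRequest("http://localhost:8080/endpoint", "<ping/>"))
```

Calling the closure on the last line looks exactly like calling a function.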
Grouping objects into data structures such as lists and maps is a fundamental coding task, something most of us do on a daily basis. Like most languages, Groovy has defined a rich library to manage these types of collections. If you've ever dabbled in Python or Ruby, Groovy's collections syntax should be familiar. Creating a list is quite similar to creating an array in the Java language, as shown in Listing 12. (Notice how the list's second item is automatically autoboxed into an Integer type.)
Listing 12. Using collections
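A sketch in the spirit of Listing 12 (the element values are assumptions; the point is that the bare literal 2 in the second position becomes an Integer):

```groovy
def things = ["cat", 2, new Date(), 3.14]   // list literal, much like a Java array initializer

assert things.size() == 4
assert things[1] instanceof Integer          // the int literal was autoboxed
```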
In addition to making lists easier to work with, Groovy adds a few new methods on collections. These methods make it possible, for example, to count occurrences of values; join an entire list together; and sort a list with the greatest of ease. You can see these collections methods in action in Listing 13.
Listing 13. Working with Groovy collections
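For example (the values here are assumptions, not the original listing):

```groovy
def words = ["uno", "dos", "tres", "dos"]

assert words.count("dos") == 2                       // count occurrences of a value
assert words.join(", ") == "uno, dos, tres, dos"     // join the entire list together
assert [3, 1, 2].sort() == [1, 2, 3]                 // sort with the greatest of ease
```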
Maps
Like lists, maps are data structures that are remarkably easy to work with in Groovy. The map in Listing 14 contains two objects, the keys being name and date. Notice that you can access the values in different ways.
Listing 14. Working with maps
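A sketch using the two keys the text names (the values are assumptions):

```groovy
def map = [name: "Groovy", date: new Date()]   // map literal with keys name and date

assert map.name == "Groovy"                    // property-style access
assert map["name"] == "Groovy"                 // subscript access
assert map.get("date") instanceof Date         // plain java.util.Map access still works
```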
Ranges
When working with collections, you'll likely find yourself making ample use of Ranges. A Range is actually an intuitive notion and easy to pick up, as it allows one to create a list of sequential values inclusively or exclusively. You use two dots (..) to declare an inclusive range and three dots (...) to declare an exclusive one, as shown in Listing 15.
Listing 15. Working with ranges
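A sketch of both range forms. One caveat: the three-dot exclusive syntax described in the article was early-beta Groovy; released Groovy spells the exclusive range ..< instead, which is what the runnable sketch below uses:

```groovy
def inclusive = 1..5          // two dots: 5 is included
assert inclusive.size() == 5
assert inclusive.contains(5)

def exclusive = 1..<5         // released-Groovy spelling of the article's 1...5
assert exclusive.size() == 4
assert !exclusive.contains(5)
</imports>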
Looping with ranges
Ranges allow for a rather slick notion when it comes to looping constructs. In Listing 16, with aRange defined as an exclusive range, the loop will print a, b, c, and d.
Listing 16. Looping with ranges
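A sketch of the loop the text describes, with aRange exclusive so that only a through d are visited (written with released Groovy's ..< exclusive-range spelling):

```groovy
def aRange = 'a'..<'e'          // exclusive: 'e' is not included

def out = new StringBuilder()
for (letter in aRange) {
    out << letter               // visits a, b, c, and d in turn
}
assert out.toString() == "abcd"
```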
Additional features of collections
If you're unfamiliar with Python and other scripting languages, then some of the additional features found in Groovy's collections will impress you. For example, once you've created a collection, you can use negative numbers to count backwards in a list, as shown in Listing 17.
Listing 17. Negative indexing
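For instance (list contents are assumptions):

```groovy
def list = ["fast", "slow", "steady"]

assert list[-1] == "steady"    // -1 counts backwards from the end
assert list[-2] == "slow"
```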
Groovy also allows you to slice lists using ranges. Slicing lets you obtain a precise subset of a list, as demonstrated in Listing 18.
Listing 18. Slicing with ranges
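A sketch of slicing (contents are assumptions):

```groovy
def list = [0, 1, 2, 3, 4, 5]

assert list[1..3] == [1, 2, 3]     // inclusive-range slice
assert list[0..<2] == [0, 1]       // exclusive-range slice
```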
Collections à la Ruby
Groovy collections can also act like Ruby collections, if you want them to. You can use Ruby-like syntax to append elements with the << syntax, concatenate with +, and perform set difference on collections with -; moreover, you can handle repetition of collections with the * syntax as shown in Listing 19. Also note that you can use == to compare collections.
Listing 19. Ruby-style collections
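A sketch of all four Ruby-style operators plus == comparison (contents are assumptions):

```groovy
def list = []
list << "a"                                 // append
list << "b"

assert list + ["c"] == ["a", "b", "c"]      // concatenation
assert list - ["a"] == ["b"]                // set difference
assert list * 2 == ["a", "b", "a", "b"]     // repetition
assert list == ["a", "b"]                   // == compares contents
```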
In Groovy, it's quite easy to iterate over any sequence. All you need to iterate over a sequence of characters is a simple for loop, as shown in Listing 20. (As you may also have noticed by now, Groovy provides a much more natural for loop syntax than traditional pre-Java 1.5 code.)
Listing 20. Iterator example
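A sketch of the idea (the word is an assumption):

```groovy
def word = "abc"
def seen = []
for (c in word) {     // the for loop walks the string's characters
    seen << c
}
assert seen.join("") == "abc"
```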
Most objects in Groovy have methods such as each and find that accept closures. Using closures to iterate over objects creates a number of exciting possibilities, as demonstrated in Listing 21.
Listing 21. Closures with iterators
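A sketch matching the description that follows: the elements sum to 6 via each, and find locates the value 3 (the list [1, 2, 3] is inferred from that description):

```groovy
def val = 0
[1, 2, 3].each { val += it }            // each hands every element to the closure
assert val == 6

def match = [1, 2, 3].find { it == 3 }  // find returns the first element the closure accepts
assert match == 3
```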
In Listing 21, the method each acts as an iterator. In this case, your closure adds the values of the elements, leaving val at 6 when complete. The find method is fairly simple, too. Each iteration passes in the element; in this case, you simply test to see if the value is 3.
So far I've focused on the basic aspects of working with Groovy, but there's far more to this language than the basics! I'll wrap up with a look at some of the more advanced development features Groovy has to offer, including Groovy-style JavaBeans components, file IO, regular expressions, and compilation with groovyc.
Invariably, applications end up employing struct-like objects representing real world entities. On the Java platform, you call these objects JavaBeans components, and they're commonly used as business objects representing orders, customers, resources, etc. Groovy simplifies the coding of JavaBeans components with its handy shorthand syntax, and also by automatically providing a constructor once you've defined the properties of a desired bean. The result, of course, is greatly reduced code, as you can see for yourself in Listings 22 and 23.
In Listing 22 you see a simple JavaBeans component that has been defined in the Java language.
Listing 22. A simple JavaBean component
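A sketch of the verbose Java-language version. The class name LavaLamp comes from the text; the property names are assumptions:

```java
// A plain JavaBean: private fields plus hand-written getters and setters.
public class LavaLamp {
    private String model;
    private String baseColor;

    public String getModel() { return model; }
    public void setModel(String model) { this.model = model; }

    public String getBaseColor() { return baseColor; }
    public void setBaseColor(String baseColor) { this.baseColor = baseColor; }
}
```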
In Listing 23, you see what happens when this bean gets Groovy. All you have to do is define your properties, and Groovy automatically gives you a nice constructor to work with. Groovy also gives you quite a bit of flexibility when it comes to manipulating instances of LavaLamp. For instance, you can use Groovy's shorthand syntax or the traditional wordy Java language syntax to manipulate the properties of your bean.
Listing 23. A Groovier JavaBeans component
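The Groovier equivalent, as a sketch (property names again assumptions): the properties alone buy you the constructor, plus both access styles.

```groovy
class LavaLamp {
    String model          // each property gets a getter/setter for free
    String baseColor
}

def lamp = new LavaLamp(model: "Classic", baseColor: "red")  // the free constructor

assert lamp.model == "Classic"        // Groovy shorthand access
lamp.setBaseColor("purple")           // the wordy JavaBean syntax still works
assert lamp.getBaseColor() == "purple"
```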
Groovy IO operations are a breeze, especially when combined with iterators and closures. Groovy takes standard Java objects such as File, Reader, and Writer and enhances them with additional methods that accept closures. In Listing 24, for example, you see the traditional java.io.File, but with the addition of the handy eachLine method.
Listing 24. Groovy IO
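A self-contained sketch of the idea; a temporary file stands in for the article's File-IO-Example.txt, and the contents are assumptions:

```groovy
def file = File.createTempFile("File-IO-Example", ".txt")
file.text = "first line\nsecond line\n"    // Groovy-added convenience setter on java.io.File

def lines = []
file.eachLine { line -> lines << line }    // closure called once per line; Groovy closes the file

assert lines == ["first line", "second line"]
file.delete()
```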
Because a file is essentially a sequence of lines, characters, etc., you can iterate over them quite simply. The eachLine method accepts a closure and iterates over each line of the file, in this case File-IO-Example.txt. Using closures in this manner is quite powerful, because Groovy ensures all file resources are closed, regardless of any exceptions. This means you can do file IO without lots of try/catch/finally clauses!
Groovy scripts are actually Java classes at the bytecode level. As a result, you can easily compile a Groovy script using groovyc. groovyc can be utilized via the command line or Ant to produce class files for scripts. These classes can be run with the normal java command, provided that the classpath includes groovy.jar and asm.jar, ObjectWeb's bytecode manipulation framework. See Resources to learn more about compiling Groovy.
No language would be worth its salt without regular expression handling. Groovy uses the Java platform's java.util.regex library -- but with a few essential tweaks. For example, Groovy lets you create Pattern objects with the ~ expression and Matcher objects with the =~ expression, as shown in Listing 25.
Listing 25. Groovy RegEx
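A sketch of both operators. The match of water and the "every where" to "nowhere" replacement come from the text; the multi-line string is placeholder text standing in for the article's stanza:

```groovy
def str = """There's water falling
every where I look"""            // multi-line string: no end quotes or +'s per line

def pattern = ~"water"           // ~ builds a java.util.regex.Pattern
assert pattern instanceof java.util.regex.Pattern

def matcher = str =~ "water"     // =~ builds a Matcher
assert matcher.find()            // the match of "water" succeeds

println str.replaceAll("every where", "nowhere")
```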
You may have noticed that you were able to define the String str in the above listing without having to add end quotes and +'s for each new line. This is because Groovy has relaxed the normal Java constraints that would have required multiple string concatenations. Running this Groovy script will print true for your match of water and then print out a stanza with all occurrences of "every where" replaced with "nowhere."
Like any project in its infancy, Groovy is a work in progress. Developers accustomed to working with Ruby and Python (or Jython) might miss the convenience of such features as mixins, script import (although it's possible to compile the desired importable script into its corresponding Java class), and named parameters for method calls. But Groovy is definitely a language on the move. It will likely incorporate these features and more as its developer base grows.
In the meantime, Groovy has a lot going for it.
Like the other languages discussed in this series, Groovy is not a replacement for the Java language but an alternative to it. Unlike the other languages discussed here, Groovy is seeking standardization through the Java Community Process (JSR 241), which means it has the potential to share equal footing with the Java language on the Java platform.
In this month's installment of alt.lang.jre, you've learned about the basic framework and syntax of Groovy, along with some of its more advanced programming features. Next month will be devoted to one of the most well-loved scripting languages for Java developers: JRuby.
- The new alt.lang.jre series launched last month, with Barry Feigenbaum's installment "Get to know Jython" (developerWorks, July 2004).
- Download Groovy from the Groovy open source project page, where you can also learn more about such topics as compilation, unit testing, regular expressions, and more.
- "JSR 241: The Groovy programming language" can be found on the Java Community Process homepage.
- Get an overview of the thought process behind Groovy, with James Strachan's "Groovy -- the birth of a new dynamic language for the Java platform" (Radio Userland, James Strachan's Weblog, August 2003).
- Read more of Richard Monson-Haefel's thoughts on Groovy, on the java.net weblogs page.
- One of Groovy's most powerful features is its agility. Learn more about the underlying principles of agile development (or XP) with Roy Miller's "Demystifying extreme programming" (developerWorks, August 2002).
- Richard Hightower and Nicholas Lesiecki's Java tools for extreme programming is a practitioner's guide to agile development on the Java platform, including a chapter on "Building Java applications with Ant" (excerpted for developerWorks, July 2002).
- Learn more about building Java (and hence Groovy) applications with Ant, with Malcolm Davis's "Incremental development with Ant and JUnit" (developerWorks, November 2000).
- In "Automating the build and test process" (developerWorks, August 2001), Erik Hatcher shows you how Ant and JUnit can be combined to bring you one step closer to XP nirvana.
- Maven is an alternative to Ant that works especially well for project management tasks. Learn more about Maven with Charles Chan's "Project management: Maven makes it easy" (developerWorks, April 2003).
- Aspect-oriented programming is an agile development technique for building highly decoupled and extensible enterprise systems. Learn more about AOP with Andrew Glover's "AOP banishes the tight-coupling blues" (developerWorks, February 2004).
- The open source template engine Freemarker was incorporated in both the Java code and the Groovy code blocks in Listing 6.
- The Jakarta Commons HttpClient library was featured in Listing 11.
- Groovy wouldn't be where it is today without the powerful influence of such languages as Python and Ruby.
- You'll find articles about every aspect of Java programming in the developerWorks Java technology zone.
- Browse for books on these and other technical topics.
- Also see the Java technology zone tutorials page for a complete listing of free Java-focused tutorials from developerWorks.
Andrew Glover is the President of Stelligent Incorporated, a Washington, DC, metro area company specializing in the construction of automated testing frameworks, which lower software bug counts, reduce integration and testing times, and improve overall code stability.
I've written an algorithm that scans through a file of IDs and compares each value with the value of an integer i (I've converted the integer to a string for comparison, and I've trimmed the "\n" suffix from each line). The algorithm compares these values for each line in the file (each ID). If they are equal, the algorithm increases i by 1 and uses recursion with the new value of i. If the values aren't equal, it compares i to the next line in the file. It does this until it has a value for i that isn't in the file, then returns that value for use as the ID of the next record.
My issue: I have a file of IDs that lists 1,3,2, because I removed the record with ID 2 and then created a new record. This shows the algorithm working correctly, as it gave the new record the ID of 2, which had previously been freed. However, when I then create another new record, the next ID chosen is 3, resulting in my ID list reading 1,3,2,3 instead of 1,3,2,4. Below is my algorithm, with the results of the print() command. I can see where it's going wrong but can't work out why. Any ideas?
Algorithm:
    def _getAvailableID(iD):
        i = iD
        f = open(IDFileName, "r")
        lines = f.readlines()
        for line in lines:
            print("%s,%s,%s" % ("i=" + str(i), "ID=" + line[:-1], (str(i) == line[:-1])))
            if str(i) == line[:-1]:
                i += 1
                f.close()
                _getAvailableID(i)
        return str(i)
Output (when the algorithm was run to find an ID for the record that should have ID 4):

    i=1,ID=1,True
    i=2,ID=1,False
    i=2,ID=3,False
    i=2,ID=2,True
    i=3,ID=1,False
    i=3,ID=3,True
    i=4,ID=1,False
    i=4,ID=3,False
    i=4,ID=2,False
    i=4,ID=2,False
    i=2,ID=3,False
    i=2,ID=2,True
    i=3,ID=1,False
    i=3,ID=3,True
    i=4,ID=1,False
    i=4,ID=3,False
    i=4,ID=2,False
    i=4,ID=2,False
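The root cause is that the recursive call's return value is discarded: each outer frame falls out of its own for loop and returns its own stale i. Returning the recursive call's result would fix it; a simpler non-recursive sketch (IDFileName is the question's file-path variable, passed here as a parameter):

```python
def get_available_id(id_file_name):
    # Read every existing ID once, stripping the trailing newline.
    with open(id_file_name) as f:
        used = {line.strip() for line in f}
    # Probe upward until we find an ID that is not already taken.
    i = 1
    while str(i) in used:
        i += 1
    return str(i)
```

With a file containing 1, 3, and 2, this returns "4"; no recursion or repeated file reads are needed.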
-----------------------------------------------------------------
Updated Issues List for Library
X3J16/95-0195  WG21/N0795
P.J. Plauger
24 September 1995

-------- Clause 17 Issues -------

17.3.1.3: A freestanding implementation doesn't include <stdexcept>, which defines class exception, needed by <exception>. Should probably move class exception to <exception>.

17.3.3.1: A C++ program must be allowed to extend the namespace std if only to specialize class numeric_limits.

17.3.4.1: Paragraph 4 is a repeat.

-------- Clause 18 Issues -------

18.2.1: float_rounds_style should be float_round_style (correct once).

18.2.1.1: Paragraph 2 is subsumed by the descriptions of radix, epsilon(), and round_error(). Should be removed here.

18.2.1.1: Paragraph 3 is repeated as 18.2.1.2, paragraph 50, where it belongs. Should be removed here.

18.2.1.1: Should say that numeric_limits<T> must be able to return T(0). Should say that round_style defaults to round_indeterminate, not round_toward_zero.

18.2.1.2: denorm_min() does *not* return the minimum positive normalized value. Should strike the mention of this function in paragraph 2.

18.2.1.2: Paragraph 22 must supply a more precise definition of ``rounding error.''

18.2.1.2: Paragraph 23 must replace ``is in range'' with ``is a normalized value''.

18.2.1.2: Paragraph 25 must replace ``is in range'' with ``is a normalized value''.

18.2.1.2: Paragraph 27 must replace ``is in range'' with ``is a finite value''.

18.2.1.2: Paragraph 29 must replace ``is in range'' with ``is a finite value''.

18.2.1.2: In paragraph 41, ``flotaing'' should be ``floating''.

18.2.1.3: Semantics must be specified for enum float_round_style.

18.5.1: type_info::operator!=(const type_info&) is ambiguous in the presence of the template operators in <utility>, and it is unnecessary. It should be removed.

18.6.1.1: Paragraph 1 incorrectly states that bad_exception is thrown by the implementation to report a violation of an exception-specification.
Such a throw is merely a permissible option.

18.7: There are five Table 28s.

-------- Clause 19 Issues -------

19.1.1: exception(const exception&) should not be declared with the return type exception&. (Error repeated in semantic description.)

-------- Clause 20 Issues -------

20.1: Allocators are described in terms of ``memory models'' which is an undefined concept in Standard C++. The term should be *defined* here as the collection of related types, sizes, etc. in Table 33 that characterize how to allocate, deallocate, and access objects of some managed type.

20.1: Paragraph 3 talks about ``amortized constant time'' for allocator operations, but gives no hint about what parameter it should be constant with respect to.

20.1: a.max_size() is *not* ``the largest positive value of X::difference_type.'' It is the largest valid argument to a.allocate(n).

20.1: Table 33 bears little resemblance to the currently accepted version of class allocator (though it should, if various bugs are fixed, as described later.) Essentially *every* item in the `expression' column is wrong, as well as all the X:: references elsewhere in the table.

20.3: binder1st is a struct in the synopsis, a class later. Should be a class uniformly, like binder2nd.

20.3.5: class unary_negate cannot return anything. Should say that its operator() returns !pred(x).

20.3.6.1: binder1st::value should have type Operation::first_argument_type, not argument_type.

20.3.6.3: binder2nd::value should have type Operation::second_argument_type, not argument_type.

20.3.7: ``Shall'' is inappropriate in a footnote, within a comment, that refers to multiple memory models not even recognized by the Standard.

20.4: return_temporary_buffer shouldn't have a second (T*) parameter. It's not in STL, it was not in the proposal to add it, and it does nothing.

20.4.1: allocator::types<T> shows all typedefs as private. They must be declared public to be usable.
20.4.1: It is not clear from Clause 14 whether explicit template member class specializations can be first declared outside the containing class. Hence, class allocator::types<void> should probably be declared inside class allocator.

20.4.1: The explicit specialization allocator::types<void> should include:

    typedef const void* const_pointer;

It is demonstrably needed from time to time.

20.4.1: Footnote 169 should read ``An implementation,'' not ``In implementation.''

20.4.1.1: allocator::allocate(size_type, types<U>::const_pointer) has no semantics for the second (hint) parameter.

20.4.1.1: allocator::allocate(size_type, types<U>::const_pointer) requires that all existing calls of the form A::allocate(n) be rewritten as al.allocate<value_type, char>(n, 0) -- a high notational price to pay for rarely used flexibility. If the non-template form of class allocator is retained, an unhinted form should be supplied, so one can write al.allocate<value_type>(n).

20.4.1.1: allocator::allocate(size_type, types<U>::const_pointer) should return neither new T nor new T[n], both of which call the default constructor for T one or more times. Note that deallocate, which follows, calls operator delete(void *), which calls no destructors. Should say it returns operator new((size_type)(n * sizeof (T))).

20.4.1.1: allocator::max_size() has no semantics, and for good reason. For allocator<T>, it knew to return (size_t)(-1) / sizeof (T) -- the largest sensible repetition count for an array of T. But the class is no longer a template class, so there is no longer a T to consult. Barring a general cleanup of class allocator, at the least max_size() must be changed to a template function, callable as either max_size<T>() or max_size(T *).
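For illustration, the old template-class computation this entry refers to can be sketched as a free function (a sketch only, not draft text):

```cpp
#include <cstddef>

// Largest sensible repetition count for an array of T: the old
// allocator<T>::max_size() computation described above.
template<class T>
std::size_t max_size_sketch() {
    return static_cast<std::size_t>(-1) / sizeof(T);
}
```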
20.4.1.1: A general cleanup of class allocator can be easily achieved by making it a template class once again:

    template<class T> class allocator {
    public:
        typedef size_t size_type;
        typedef ptrdiff_t difference_type;
        typedef T* pointer;
        typedef const T* const_pointer;
        typedef T& reference;
        typedef const T& const_reference;
        typedef T value_type;
        pointer address(reference x) const;
        const_pointer address(const_reference x) const;
        pointer allocate(size_type n);
        void deallocate(pointer p);
        size_type init_page_size() const;
        size_type max_size() const;
    };

The default allocator object for a container of type T would then be allocator<T>(). All of the capabilities added with the Nov. '94 changes would still be possible, and users could write replacement allocators with a *much* cleaner interface.

20.4.1.2: operator new(size_t N, allocator& a) can't possibly return a.allocate<char, void>(N, 0). It would attempt to cast the second parameter to allocator::types<void>::const_pointer, which is undefined in the specialization allocator::types<void>. If related problems aren't fixed, the second template argument should be changed from void to char, at the very least.

20.4.1.2: If allocator is made a template class once again, this version of operator new becomes:

    template<class T> void *operator new(size_t, allocator<T>& a);

20.4.1.3: The example class runtime_allocator supplies a public member allocate(size_t) obviously intended to mask the eponymous function in the base class allocator. The signature must be allocate<T, U>(size_t, types<U>::const_pointer) for that to happen, however. The example illustrates how easy it is to botch designing a replacement for class allocator, given its current complex interface. (The example works as is with the revised template class allocator described earlier.)

20.4.2: raw_storage_iterator<OI, T>::operator*() doesn't return ``a reference to the value to which the iterator points.'' It returns *this.
20.4.3.1: Template function allocate doesn't say how it should ``obtain a typed pointer to an uninitialized memory buffer of a given size.'' Should say that it calls operator new(size_t).

20.4.3.2: Template function deallocate has no semantics. Should say that it calls operator delete(buffer).

20.4.3.5: get_temporary_buffer fails to make clear where it ``finds the largest buffer not greater than ...'' Do two calls in a row ``find'' the same buffer? Should say that the template function allocates the buffer from an unspecified pool of storage (which may be the standard heap). Should also say that the function can fail to allocate any storage at all, in which case the `first' component of the return value is a null pointer.

20.4.3.5: Strike second parameter to return_temporary_buffer, as before. Should say that a null pointer is valid and does nothing. Should also say that the template function renders indeterminate the value stored in p and makes the returned storage available for future calls to get_temporary_buffer.

20.4.4: Footnote 171 talks about ``huge pointers'' and type ``long long.'' Neither concept is defined in the Standard (nor should it be). This and similar comments desperately need rewording.

20.4.4.3: Header should be ``uninitialized_fill_n'', not ``uninitialized_fill.''

20.4.5: When template class auto_ptr ``holds onto'' a pointer, is that the same as storing its value in a member object? If not, what can it possibly mean?

20.4.5: auto_ptr(auto_ptr&) is supposed to be a template member function.

20.4.5: auto_ptr<T>::operator= should return auto_ptr<T>&, not void, according to the accepted proposal.

20.4.5.1: Need to say that auto_ptr<T>::operator= returns *this.

20.4.5.2: auto_ptr<T>::operator->() doesn't return get()->m -- there is no m. Should probably say that ap->m returns get()->m, for an object ap of class auto_ptr<T>.
20.4.5.2: auto_ptr<T>::release() doesn't say what it returns. Should say it returns the previous value of get().

20.4.5.2: auto_ptr<T>::reset(X*) doesn't say what it returns, or that it deletes its current pointer. Should say it executes ``delete get()'' and returns its argument.

20.5: The summary of <ctime> excludes the function clock() and the types clock_t and time_t. Is this intentional?

-------- Clause 21 Issues -------

21.1: template function operator+(const basic_string<T,tr,A> lhs, const_pointer rss) should have a second argument of type const T *rhs.

21.1: Paragraph 1 begins, ``In this subclause, we call...'' All first person constructs should be removed.

21.1.1.1: string_char_traits::ne is hardly needed, given the member eq. It should be removed.

21.1.1.1: string_char_traits::char_in is neither necessary nor sufficient. It simply calls is.get(), but it insists on using the basic_istream with the default ios_traits. operator>> for basic_string still has to call is.putback(charT) directly, to put back the delimiter that terminates the input sequence. char_in should be eliminated.

21.1.1.1: string_char_traits::char_out isn't really necessary. It simply calls os.put(), but it insists on using the basic_ostream with the default ios_traits. char_out should be eliminated.

21.1.1.1: string_char_traits::is_del has no provision for specifying a locale, even though isspace, which it is supposed to call, is notoriously locale dependent. is_del should be eliminated, and operator>> for strings should stop on isspace, using the istream locale, as does the null-terminated string extractor in basic_istream.

21.1.1.1: string_char_traits is missing three important speed-up functions, the generalizations of memchr, memmove, and memset. Nearly all the mutator functions in basic_string can be expressed as calls to these three primitives, to good advantage.
21.1.1.2: No explanation is given for why the descriptions of the members of template class string_char_traits are ``default definitions.'' If it is meant to suggest that the program can supply an explicit specialization, provided the specialization satisfies the semantics of the class, then the text should say so (here and several other places as well).

21.1.1.2: string_char_traits::eos should not be required to return the result of the default constructor char_type() (when specialized). Either the specific requirement should be relaxed or the function should be eliminated.

21.1.1.2: string_char_traits::char_in, if retained, should not be required to return is >> a, since this skips arbitrary whitespace. The proper return value is is.get().

21.1.1.2: string_char_traits::is_del, if retained, needs to specify the locale in effect when it calls isspace(a).

21.1.1.3: Paragraph 1 doesn't say enough about the properties of a ``char-like object.'' It should say that it doesn't need to be constructed or destroyed (otherwise, the primitives in string_char_traits are woefully inadequate). string_char_traits::assign (and copy) must suffice either to copy or initialize a char-like element. The definition should also say that an allocator must have the same definitions for the types size_type, difference_type, pointer, const_pointer, reference, and const_reference as class allocator::types<charT> (again because string_char_traits has no provision for funny address types).

21.1.1.4: The copy constructor for basic_string should be replaced by two constructors:

    basic_string(const basic_string& str);
    basic_string(const basic_string& str, size_type pos, size_type n = npos, Allocator& = Allocator());

The copy constructor should copy the allocator object, unless explicitly stated otherwise.

21.1.1.4: basic_string(const charT*, size_type n, Allocator&) should be required to throw length_error if n > max_size().
Should say:

    Requires: s shall not be a null pointer
              n <= max_size()
    Throws: length_error if n > max_size().

21.1.1.4: basic_string(size_type n, charT, Allocator&) is required to throw length_error if n == npos. Should say:

    Requires: n <= max_size()
    Throws: length_error if n > max_size().

21.1.1.6: basic_string::size() Notes says the member function ``Uses traits::length().'' There is no reason for this degree of overspecification. The comment should be struck.

21.1.1.6: basic_string::resize should throw length_error for n >= max_size(), not n == npos.

21.1.1.6: resize(size_type) should not have a Returns clause -- it's a void function. Clause should be labeled Effects.

21.1.1.6: resize(size_type) should call resize(n, charT()), not resize(n, eos()).

21.1.1.6: basic_string::resize(size_type) Notes says the member function ``Uses traits::eos().'' It should actually use charT() instead. The comment should be struck.

21.1.1.6: basic_string::reserve says in its Notes clause, ``It is guaranteed that...'' A non-normative clause cannot make guarantees. Since the guarantee is important, it should be labeled differently. (This is one of many Notes clauses that make statements that should be normative, throughout the description of basic_string.)

21.1.1.8.2: basic_string::append(size_type n, charT c) should return append(basic_string(n, c)). Arguments are reversed.

21.1.1.8.3: basic_string::assign(size_type n, charT c) should return assign(basic_string(n, c)). Arguments are reversed.

21.1.1.8.4: basic_string::insert(size_type n, charT c) should return insert(basic_string(n, c)). Arguments are reversed.

21.1.1.8.4: basic_string::insert(iterator p, charT c) should not return p, which may well be invalidated by the insertion. It should return the new iterator that designates the inserted character.

21.1.1.8.4: basic_string::insert(iterator, size_type, charT) should return void, not iterator. (There is no Returns clause, luckily.)
21.1.1.8.5: basic_string::remove(iterator) says it ``calls the character's destructor'' for the removed character. This is pure fabrication, since constructors and destructors are called nowhere else, for elements of the controlled sequence, in the management of the basic_string class. The words should be struck.

21.1.1.8.5: basic_string::remove(iterator, iterator) says it ``calls the character's destructor'' for the removed character(s). This is pure fabrication, since constructors and destructors are called nowhere else, for elements of the controlled sequence, in the management of the basic_string class. The words should be struck.

21.1.1.8.5: basic_string::remove(iterator, iterator) Complexity says ``the destructor is called a number of times ...'' This is pure fabrication, since constructors and destructors are called nowhere else, for elements of the controlled sequence, in the management of the basic_string class. The Complexity clause should be struck.

21.1.1.8.6: replace(size_type pos1, size_type, const basic_string&,...) Effects has the expression ``size() - &pos1.'' It should be ``size() - pos1.''

21.1.1.8.6: basic_string::replace(size_type, size_type n, charT c) should return replace(pos, n, basic_string(n, c)). Arguments are reversed.

21.1.1.8.8: basic_string::swap Complexity says ``Constant time.'' It doesn't say with respect to what. Should probably say, ``with respect to the lengths of the two strings, assuming that their two allocator objects compare equal.'' (This assumes added wording describing how to compare two allocator objects for equality.)

21.1.1.9.1: basic_string::find(const charT*, ...) Returns has a comma missing before the pos argument.

21.1.1.9.8: basic_string::compare has nonsensical semantics. Unfortunately, the last version approved, in July '94 Resolution 16, is also nonsensical in a different way.
The description should be restored to the earlier version, which at least has the virtue of capturing the intent of the original string class proposal:

    1) If n is less than str.size() it is replaced by str.size().
    2) Compare the smaller of n and size() - pos with traits::compare.
    3) If that result is nonzero, return it.
    4) Otherwise, return negative for size() - pos < n, zero for
       size() - pos == n, or positive for size() - pos > n.

21.1.1.10.3: operator!=(const basic_string&, const basic_string&) is ambiguous in the presence of the template operators in <utility>, and it is unnecessary. It should be removed.

21.1.1.10.5: operator>(const basic_string&, const basic_string&) is ambiguous in the presence of the template operators in <utility>, and it is unnecessary. It should be removed.

21.1.1.10.6: operator<=(const basic_string&, const basic_string&) is ambiguous in the presence of the template operators in <utility>, and it is unnecessary. It should be removed.

21.1.1.10.7: operator>=(const basic_string&, const basic_string&) is ambiguous in the presence of the template operators in <utility>, and it is unnecessary. It should be removed.

21.1.1.10.7: operator>= with const charT* rhs should return lhs >= basic_string(rhs), not <=.

21.1.1.10.8: Semantics of operator>> for basic_string are vacuous. Should be modeled after those for the earlier string class.

21.1.1.10.8: Semantics of operator<< for basic_string are vacuous. Should be modeled after those for the earlier string class.

21.1.1.10.8: getline for basic_string reflects none of the changes adopted by July '94 resolution 26. It should not fail if a line exactly fills, and it should set failbit if it *extracts* no characters, not if it *appends* no characters. Should be changed to match 27.6.1.3.

21.1.1.10.8: getline for basic_string says that extraction stops when npos - 1 characters are extracted.
The proper value is str.max_size() (which is less than allocator.max_size(), but shouldn't be constrained more precisely than that). Should be changed.

21.2: There are five Table 44s.

21.2: <cstring> doesn't define size_type. Should be size_t.

-------- Clause 22 Issues -------

22.1: template operator<<(basic_ostream, const locale&) as well as template operator>>(basic_ostream, const locale&) now have a second template argument (for ios traits) added without approval. While this change may be a good idea, it should be applied uniformly (which has not happened), and only after committee approval.

22.1: The four template classes money_get, money_put, moneypunct, and moneypunct_byname should all have a second template parameter ``bool International''.

22.1.1: locale::category is defined as type unsigned. For compatibility with C, it should be type int.

22.1.1: Class locale is missing the constructor:

    locale(const locale& other, category)

(Added Nov. '94). It should be added.

22.1.1: Class locale has the constructor locale::locale(const locale& other, const locale& one, category). I can find no resolution that calls for this constructor to be added.

22.1.1: Example of use of num_put has silly arguments. First argument should be ostreambuf_iterator(s.rdbuf()).

22.1.1: Paragraph 8 says that locale::transparent() has unspecified behavior when imbued on a stream or installed as the global locale. There is no good reason why this should be so and several reasons why the behavior should be clearly defined. The sentence should be struck.

22.1.1: Paragraph 9 says that ``cach[e]ing results from calls to locale facet member functions during calls to iostream inserters and extractors, and in streambufs between calls to basic_streambuf::imbue, is explicitly supported.'' In the case of inserters and extractors, this behavior follows directly from paragraph 8. No need to say it again.
For basic_streambuf, the draft can (and should) say explicitly that the stream buffer fixates on a facet at imbue time and ignores any subsequent changes that might occur in the delivered facet until the next imbue time (if then). (An adequate lifetime for the facet can be assured by having the basic_streambuf object memorize a copy of a locale object directly containing the facet, as well as a pointer to the facet, for greater lookup speed.) In any event, saying something ``is explicitly supported'' doesn't make the behavior *required.* The paragraph should be struck, and words added to the description of basic_streambuf to clarify the lifetime of an imbued codecvt facet. (More words are needed here anyway, for other reasons.) 22.1.1.1.1: locale::category doesn't list the value none. 22.1.1.1.1: Table 46 lists the ctype facets codecvt<char, wchar_t, mbstate_t> and codecvt<wchar_t, char, mbstate_t> as being essential, but what about codecvt<char, char, mbstate_t>? Should say that this facet must be present and must cause no conversion. 22.1.1.1.1: Table 46 omits the second (Intl) type parameter on all money_* facets. The number of listed facets should thus double (one set for false and the other for true.) 22.1.1.1.1: Table 46, and paragraph 3 following, identify the facets that implement each locale category (in the C library sense). But these words offer no guidance as to what facets should be present in the default locale (locale::classic()). The template classes listed each represent an unbounded set of possible facets. 
Should list the following *31* explicit instantiations of the templates as being required: collate<char> collate<wchar_t> ctype<char> ctype<wchar_t> codecvt<char, char, mbstate_t> codecvt<char, wchar_t, mbstate_t> codecvt<wchar_t, char, mbstate_t> moneypunct<char, false> moneypunct<wchar_t, false> moneypunct<char, true> moneypunct<wchar_t, true> money_get<char, false, istreambuf_iterator<char> > money_get<wchar_t, false, istreambuf_iterator<wchar_t> > money_put<char, false, ostreambuf_iterator<char> > money_put<wchar_t, false, ostreambuf_iterator<wchar_t> > money_get<char, true, istreambuf_iterator<char> > money_get<wchar_t, true, istreambuf_iterator<wchar_t> > money_put<char, true, ostreambuf_iterator<char> > money_put<wchar_t, true, ostreambuf_iterator<wchar_t> > num_get<char, istreambuf_iterator<char> > num_get<wchar_t, istreambuf_iterator<wchar_t> > num_put<char, ostreambuf_iterator<char> > num_put<wchar_t, ostreambuf_iterator<wchar_t> > numpunct<char> numpunct<wchar_t> time_get<char, istreambuf_iterator<char> > time_get<wchar_t, istreambuf_iterator<wchar_t> > time_put<char, ostreambuf_iterator<char> > time_put<wchar_t, ostreambuf_iterator<wchar_t> > messages<char> messages<wchar_t> 22.1.1.1.3: Paragraph 1 of description of class locale::id is not a sentence. Should read, ``The class locale::id provides ...'' (??) 22.1.1.2: As mentioned earlier, locale::locale(const locale&, const locale&, category) has been added without approval. It should be struck. 22.1.1.3: Behavior of locale::use() is not defined if facet is present in the locale. Should say it returns that facet. 22.1.1.3: Description of locale::use() Effects contains a nonsense statement: ``Because locale objects are immutable, subsequent calls to use<Facet>() return the same object, regardless of changes to the global locale.'' If a locale object is immutable, then changes to the global locale should *always* shine through, for any facet that is not present in the *this locale object. 
If the intent is to mandate cacheing semantics, as sketched out in the original locales proposal, this sentence doesn't quite succeed. Nor should it. Cacheing of facets found in the global locale leads to horribly unpredictable behavior, is unnecessary, and subverts practically any attempt to restore compatibility with past C++ practice and the current C Standard. The sentence should be struck. 22.1.1.3: Description of locale::use Notes uses the term ``value semantics'' and the verb ``to last.'' Are either of these terms defined within the Standard? The sentence should be reworded, or struck since it's non-normative anyway. 22.1.1.3: locale::use should not be required to return true after a locale::has call for the same facet, for reasons described elsewhere. The sentence should be struck. 22.1.1.4: operator<<(basic_ostream, const locale&) has a nonsense definition, since loc.name() cannot be inserted into an ostream with arbitrary element type. Should be confined to work just with ostream, or struck as needless frippery. 22.1.1.4: operator>>(basic_istream, loc&) should have a second parameter of type locale&. The Effects must also clarify what is meant by ``read a line'' and ``construct a locale from it.'' Should probably say it calls getline(s, str) for some string str. If that succeeds it then executes loc = locale(str.c_str()). But then how should it convert from a non-char string to a char string? Should be confined to work with just istream, or struck as needless frippery. 22.1.1.4: operator>>(basic_istream, loc&) should specify how constructing a locale can fail, if that is truly possible. 22.1.1.5: locale::global has the clause ``Replaces ::setlocale().'' Does this mean that a conforming implementation must *not* define the function setlocale? Better to say that global behaves as if it calls ::setlocale(). 
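The ``behaves as if it calls ::setlocale()'' wording suggested above can be sketched as follows. This is a minimal illustration using the final names std::locale::global and std::setlocale; the helper name install_global is hypothetical:

```cpp
#include <cassert>
#include <clocale>
#include <cstring>
#include <locale>

// Sketch: installing a named locale globally should behave as if
// setlocale(LC_ALL, name) had been called, rather than forbidding an
// implementation from defining setlocale at all.
const char* install_global(const char* name)
{
    std::locale::global(std::locale(name));   // as if ::setlocale(LC_ALL, name)
    return std::setlocale(LC_ALL, 0);         // query the C-level locale
}
```

With the name "C", the subsequent C-level query then reports "C" as well, which is the compatibility the item argues for.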
22.1.1.5: locale::transparent() Notes says ``The effect of imbuing this locale into an iostreams component is unspecified.'' If this is a normative statement, it doesn't belong in a Notes clause. And if it's intended to be normative, it should be struck. Imbuing a stream with locale::transparent() is the *only* way to restore the behavior of iostreams to that in effect for *every* C++ program running today. It is also essential in providing compatible behavior with the C Standard. The sentence should be struck. 22.2.1.1: ctype::narrow(charT, char) should return int, not char, to avoid sign-extension problems. It should also have a default value for dfault, such as '\0'. 22.2.1.1: ctype::do_scan_is/do_scan_not should return const charT*, not const char*. 22.2.1.1: ctype::do_narrow(charT, char) should return int, not char, to avoid sign-extension problems. 22.2.1.1.2: ctype::do_is(ctype_mask, charT) Returns is already described under Effects. 22.2.1.1.2: ctype::do_is(const charT*, etc.) should return high. 22.2.1.1.2: ctype::do_narrow does not have to require that digits have successive codes. This is true of all character sets in C. 22.2.1.3: ctype<char>::tolower/toupper/widen(char) should return int, not char, to avoid sign-extension problems. 22.2.1.3: ctype<char>::narrow(charT, char) should return int, not char, to avoid sign-extension problems. It should also have a default value for dfault, such as '\0'. 22.2.1.3: ctype<char>::do_tolower/do_toupper(char) should return int, not char, to avoid sign-extension problems. 22.2.1.3.2: ctype<char> members table_, classic_table_ and delete_it_ have names that suggest ``exposition only'' (with trailing underscores), are sometimes shown as protected and other times shown as private, and are grossly overspecified in the bargain. The ``classic table'', if needed, can always be obtained from the "C" locale. It should not be required in every ctype<char> object. 
The delete flag is of interest only to the destructor and should not be exposed. The table is of interest only to the is* functions, which have no virtuals underneath. Hence, *none* of these objects should be part of the external interface for the class. 22.2.1.3.2: ctype<char>::do_is(const char*, etc.) should return high. 22.2.1.3.3: ctype<char> describes this subclause as ``overridden virtual functions,'' but they're not. A template specialization has nothing to do with any virtuals declared in the template. Should be renamed. (Should also have some words added, since there are none.) 22.2.1.4: Description of codecvt, paragraph 3, fails to make clear how an implementation can ``provide instantiations for <char, wchar_t, mbstate_t> and <wchar_t, char, mbstate_t>.'' Must specializations be written for the template? If so, must they also have virtuals that do the actual work? Or can the implementation add to the default locale facets derived from the template, overriding the virtual do_convert? Needs to be clarified. 22.2.1.4: Implementations should also be required to provide an instantiation of codecvt<char, char, mbstate_t> which transforms characters one for one (preferably by returning noconv). It is needed for the very common case, basic_filebuf<char>. 22.2.1.4.2: codecvt::do_convert uses pointer triples (from, from_next, from_end) and (to, to_next, to_end) where only pairs are needed. Since the function ``always leaves the from_next and to_next pointers pointing one beyond the last character successfully converted,'' the function must be sure to copy from to from_next and to to to_next early on. A better interface would eliminate the from and to pointers. 22.2.1.4.2: codecvt::do_convert says ``If no translation is needed (returns noconv), sets to_next equal to argument to.'' The previous paragraph strongly suggests that the function should also set from_next to from. Presumably, the program will call do_convert once, with nothing to convert. 
If it returns noconv, the program will omit future calls to do_convert. If that is the intended usage, then it should be permissible to call any instance of do_convert with (mostly) null pointers, to simplify such enquiries -- and the wording should make clear how to make such a test call. If that is not the intended usage, then a result value should be added that permits such a report, so adaptive programs can omit future calls to convert that do nothing. 22.2.1.4.2: codecvt::do_convert Notes says that the function ``Does not write into *to_limit.'' Since there is no requirement that converted characters be written into sequentially increasing locations starting at to, this is largely toothless. Effects clause should be written more precisely. 22.2.1.4.2: codecvt::do_convert Returns says that the function returns partial if it ``ran out of space in the destination.'' But the function mbrtowc, for example, can consume the remaining source characters, beyond the last delivered character, and absorb them into the state. It would be a pity to require do_convert to undo this work. Should say that partial can also mean the function ran out of source characters partway through a conversion. Then clarify that, after a return of partial, the next call to do_convert should begin with any characters between from_next and from_end, of which there might be none. 22.2.2: Description of num_get and num_put say that ``implementations are allowed to delegate extraction of smaller types to extractors for larger types, but are not required to do so.'' This suggests that a user replacement for num_get has to assume that get(long double) might be called for *all* extractions. What if a program doesn't want integers to have decimal points and exponents? Implementor latitude should be more restricted. 22.2.2.1: Template class num_get defines the type ios as basic_ios<charT>, which it then uses widely to characterize parameters. 
Thus, this facet can be used *only* with iostreams classes that use the default traits ios_traits<charT>. Since the use of num_get is mandated for *all* basic_istream classes, this restriction rules out essentially any substitution of traits. Best fix is to make the ios parameter an ios_base parameter on all the do_get calls, then change ios accordingly. This is sufficient if setstate is moved to ios_base as proposed elsewhere. But it requires further fiddling for num_put if fill is moved *out* of ios_base, as also proposed. Must be fixed, one way or another. 22.2.2.1.2: num_get::do_get Effects says the functions can call str.setstate, which may throw ios_base::failure. How is such an exception to be distinguished from one thrown by a streambuf virtual? The latter must set badbit, which might even throw another exception. Needs to be clarified. 22.2.2.1.2: num_get::do_get Notes says ``if present, digit grouping is checked after the entire number is read.'' Does this mean ``47,,,'' might be consumed if comma is the thousands separator? Also says, ``When reading a non-numeric boolean value, the names are compared exactly.'' What if the names are ``abc'' and ``abcde''? Which one wins if the input is ``abcde''? What if the input is ``abcdx''? Desperately needs clarifying. 22.2.2.2: Template class num_put defines the type ios as basic_ios<charT>, which it then uses widely to characterize parameters. Thus, this facet can be used *only* with iostreams classes that use the default traits ios_traits<charT>. Since the use of num_put is mandated for *all* basic_ostream classes, this restriction rules out essentially any substitution of traits. Best fix is to make the ios parameter an ios_base parameter on all the do_put calls, then change ios accordingly. This is sufficient if setstate is moved to ios_base as proposed elsewhere. But it requires further fiddling for num_put if fill is moved *out* of ios_base, as also proposed. Must be fixed, one way or another. 
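For context on the interface under discussion, here is a minimal sketch of how an istream delegates extraction to the num_get facet. The signatures shown are the final-standard ones (where the third argument did become ios_base&, the change argued for here); the helper name read_long is hypothetical:

```cpp
#include <cassert>
#include <iterator>
#include <locale>
#include <sstream>
#include <string>

// Sketch: extracting a long through the num_get facet directly, the
// way basic_istream::operator>> is specified to do it.
long read_long(const std::string& text)
{
    std::istringstream in(text);
    std::ios_base::iostate err = std::ios_base::goodbit;
    long value = 0;
    std::use_facet<std::num_get<char> >(in.getloc())
        .get(std::istreambuf_iterator<char>(in),
             std::istreambuf_iterator<char>(), in, err, value);
    return value;
}
```

Because only an ios_base& is required, the facet works regardless of the stream's character traits.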
22.2.3.1: The syntax specified for numeric values is out of place in numpunct. 22.2.3.1: Description of numpunct says, ``For parsing, if the digits portion contains no thousands-separators, no grouping constraint is applied.'' This suggests that thousands-separators are permitted in an input sequence, and that the grouping constraint is applied, but it is profoundly unclear on how this might be done. Allowing thousands-separators at all in input is risky -- requiring that grouping constraints be checked is an outrageous burden on implementors, for a payoff of questionable utility and desirability. Should remove any requirement for recognizing thousands-separators and grouping on input. And the effect on output needs considerable clarification. 22.2.3.1.1: Declaring vector<char> as the return type of numpunct::grouping significantly adds to the number of header lines that must be read for practically any program, yet produces little or no payoff over string. Should be changed to string. 22.2.3.1.2: numpunct::do_grouping has an encoding that differs arbitrarily from that for C. It is also not as expressive as the C encoding. Should return a string whose c_str() has the same meaning as in C. 22.2.4: Template classes collate, time_get, time_put, money_get, money_put, moneypunct, messages, and their support classes still have only sketchy semantics -- over a year after they were originally accepted into the draft. They are based on little or no prior art, and they present specification problems that can be addressed properly only with detailed descriptions, which do not seem to be forthcoming. Even if adequate wording were to magically appear on short notice, the public still deserves the courtesy of a proper review. For all these reasons, and more, the remainder of clause 22 from this point on should be struck. 22.2.4.1.2: collate::transform should be collate::do_transform. 22.2.4.1.2: collate::hash should be collate::do_hash. 
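The C encoding referred to above is the localeconv grouping convention, which can be sketched as follows. The helper name group_size is hypothetical; the encoding rules themselves come from the C Standard:

```cpp
#include <cassert>
#include <climits>
#include <cstddef>
#include <string>

// Sketch: interpret a C-style grouping string as localeconv defines it.
// "\3" means repeating groups of 3; "\3\2" means 3, then repeating 2;
// an element equal to CHAR_MAX means no further grouping. Returns the
// size of group i, counting from the rightmost digit, with 0 meaning
// "ungrouped".
std::size_t group_size(const std::string& grouping, std::size_t i)
{
    if (grouping.empty())
        return 0;                                  // no grouping at all
    std::size_t k = i < grouping.size() ? i : grouping.size() - 1;
    unsigned char g = static_cast<unsigned char>(grouping[k]);
    return g == CHAR_MAX ? 0 : g;                  // last element repeats
}
```

A do_grouping that returns such a string is directly usable with C-locale data, which is the point of the item.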
22.2.5.1: time_get/put::ios should be changed from basic_ios<charT> to ios_base, same as num_get and num_put. 22.2.5.1.2: time_get::do_date_order refers to ``the date format specified by 'X','' but that is a time format (e.g. 06:55:15) in strftime. Should probably be 'x'. Presumably, it also refers to the behavior of strftime when the global locale is set to match the locale for which the time_get facet was created, but that is not stated. 22.2.5.3.2: time_put::do_put should have a default last argument (modifier = 0). 22.2.5.3.2: time_put::do_put talks about a format ``modifier, interpreted identically as the format specifiers in the string argument to the standard library function strftime().'' But that function has no modifiers. Modifiers should be struck from put and do_put functions. 22.2.6.1.2: money_get::do_get talks about ``the format pattern'' used to specify how to read characters, but fails to specify what that pattern might be. It is hard to select between pos_format and neg_format when the sign is one of the items to be read -- and the placement of the sign is determined by the choice of format(!) Should either say that pos_format is always used, or permit a more general input format. 22.2.6.1.2: money_get::do_get says ``optional whitespace is consumed'' but fails to specify how whitespace is defined. Should probably say that ctype<charT>.is(ctype_base::space, ch) returns true for a whitespace character ch. (Should also use a similar recipe for what constitutes a ``digit,'' but this is less important.) 22.2.6.3.1: moneypunct::decimal_point and thousands_sep return charT, while numpunct equivalents return strings. Should be reconciled. 22.2.7.1: messages_base::THE_POSIX_CATALOG_IDENTIFIER_TYPE is not a defined type. 22.2.7.1: The name `messages' differs from the C name `message' in a small and arbitrary way that will lead to numerous errors in future. Names should be reconciled. 
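The 'x' versus 'X' distinction above is the one strftime itself draws. A minimal sketch (the helper name is hypothetical, and the commented outputs assume the "C" locale, which is the locale in effect at program startup):

```cpp
#include <cstddef>
#include <cstring>
#include <ctime>

// Sketch: in strftime, %x is the locale's *date* representation and %X
// its *time* representation -- hence the correction from 'X' to 'x'.
void format_date_time(const std::tm& t, char* date_out, char* time_out,
                      std::size_t n)
{
    std::strftime(date_out, n, "%x", &t);   // date, e.g. 06/25/95
    std::strftime(time_out, n, "%X", &t);   // time, e.g. 06:55:15
}
```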
22.2.7.1: ``typedef int catalog'' in template class messages hides type catalog in messages_base. Probably should be struck. 22.2.7.1: messages::open first argument is supposed to be `const char *' not `const basic_string<char>&.' See March '95 Resolution #19. 22.2.7.1.2: messages::do_get should at least specify a reliable way to get the messages no and yes from the C locale. As it stands, it is worthless in a portable program. 22.3: Table 48 omits several LC_* macro names, for no apparent reason. -------- Clause 23 Issues ------- 23.1: Table 50 should also list the required constructors X(al) and X(a, al), for al an object of type Allocator. 23.1: Copy constructors for all containers should copy the allocator. A separate constructor, with a required allocator second argument, should be added to permit the introduction of a new allocator when copying a container. (Same is true for template class string.) 23.1.1: Table 52 should make clear that a.insert(p, t) returns an iterator designating the newly inserted element. 23.1.1: Table 53 should list a.at(n) for all containers that supply a[n] (vector, deque). 23.1.2: Table 54 lists a_uniq.insert(t) for functionality that is now merged with a.insert(t). 23.1.2: Table 54 should describe a.insert(t) much the same as a.insert(p, t), with regard to inserting unique elements. It should also clarify that ``multi'' versions return iterator and non-multi versions return pair. 23.2.1: The comments on the member functions operator~ and operator bool of bitset::reference are erroneous. 23.2.1: bitset::set(size_t pos, int val = 1) should be bitset::set(size_t pos, bool val = true). 23.2.1: bitset is missing: const bool operator[](size_t pos) const; reference at(size_t pos); bool at(size_t pos) const; 23.2.1.3: bitset<N> operator& should return bitset(lhs) &= rhs, not &= pos. 23.2.1.3: bitset<N> operator| should return bitset(lhs) |= rhs, not |= pos. 23.2.1.3: bitset<N> operator^ should return bitset(lhs) ^= rhs, not ^= pos. 
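The corrected form of the non-member bitset operators reads naturally as a one-liner. A sketch for operator&, written as a hypothetical free function:

```cpp
#include <bitset>
#include <cstddef>

// Sketch of the corrected non-member operator&: copy lhs, then apply
// the compound operator with rhs (not with a position).
template <std::size_t N>
std::bitset<N> and_sketch(const std::bitset<N>& lhs, const std::bitset<N>& rhs)
{
    return std::bitset<N>(lhs) &= rhs;
}
```

The | and ^ corrections follow the same pattern with |= and ^=.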
23.2.2: deque::iterator and deque::const_iterator are incorrectly declared in terms of Allocator types. They must be classes peculiar to deque. 23.2.3.1: list::iterator and list::const_iterator are incorrectly declared in terms of Allocator types. They must be classes peculiar to list. 23.2.3: list::reverse_iterator and list::const_reverse_iterator should be defined in terms of reverse_bidirectional_iterator, not reverse_iterator. 23.2.3.7: list::splice Effects should say ``The result is unchanged if...'' not ``The result is unchanged is ...'' 23.2.3.7: list::unique Effects doesn't say what happens with binary_pred in the template form. Should say that the predicate for removal is either operator== or binary_pred. 23.2.3.7: list::unique does not apply the binary predicate ``Exactly size() - 1'' times if size() is zero. Should qualify the statement for non-empty lists only. 23.2.3.7: list::merge doesn't state the ordering criteria for either version of the two functions, at least not with sufficient completeness. 23.2.5.2: Constructor vector() should be explicit vector(Allocator = allocator) 23.2.5.2: Constructor `explicit vector(size_type, const T& = T())' should be explicit vector(size_type, const T& = T(), Allocator = allocator) 23.2.5.2: Constructor `template<class II> vector(II, II)' should be template<class II> vector(II, II, Allocator = allocator) 23.2.5.4: vector::resize should insert sz-s.size() elements, not s.size()-sz. (I believe this text is replicated for all resize descriptions.) 23.2.5.6: vector::insert template cannot meet the stated complexity requirements (originally intended for a random_access_iterator) when the template class parameter InputIterator is truly an input_iterator. They need to be *carefully* rethought. (See 23.2.5.2 for the handling of vector::vector template.) 
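The intended resize arithmetic (sz - size(), not size() - sz) can be sketched as a standalone helper; the function name is hypothetical:

```cpp
#include <vector>

// Sketch of the corrected resize semantics: append sz - size() copies
// of c when growing (not size() - sz), or erase the tail when shrinking.
template <class T>
void resize_sketch(std::vector<T>& v,
                   typename std::vector<T>::size_type sz, const T& c)
{
    if (sz > v.size())
        v.insert(v.end(), sz - v.size(), c);
    else
        v.erase(v.begin() + sz, v.end());
}
```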
23.2.6: vector<bool, allocator>::const_reference should be bool, not const reference. 23.2.6: vector<bool>::reference should define operator=(const reference& x) as returning ``*this = bool(x)''. The default assignment operator is not adequate for this class. 23.3.1: classes map, multimap, set, and multiset fail to define member types iterator and const_iterator properly. They are *not* synonyms for Allocator::types<value_type>:: pointer, etc. 23.3.1: map::operator[] return type should be Allocator::types<T>.reference, not T&. 23.3.1: map::operator[](const key_type&) const is an unapproved (and nonsensical) addition. It should be struck. 23.3.1.1: Much of the description of template classes map, multimap, set, and multiset has no semantics. These must be supplied. -------- Clause 24 Issues ------- 24.1.6: Class istreambuf_iterator should be declared with public base class input_iterator. There is then no need to add a special signature for iterator_category (which is missing from the <iterator> synopsis). 24.1.6: Template operator==(istreambuf_iterator) should not have default template parameters. 24.1.6: Template operator!=(istreambuf_iterator) is ambiguous in the presence of template operator!=(const T&, const T&). It should be struck. 24.1.6: Class ostreambuf_iterator should be declared with public base class output_iterator. There is then no need to list a special signature for iterator_category. 24.1.6: Template operator==(ostreambuf_iterator) and corresponding operator!= are nonsensical and unused. They should be struck. 24.2.6: Template function distance should have the requirement that last is reachable from first by incrementing first. 24.3.1: Paragraph 3 of description of reverse iterators has not been updated to reflect the addition of the Reference template parameter. 24.3.1.1: reverse_bidirectional_iterator::base and operator* should both be const member functions. 24.3.1.2.1: Effect of reverse_bidirectional_iterator() is not described. 
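For the reverse-iterator items here and below, the canonical semantics are worth recalling: operator* delivers *(current - 1), so a reverse iterator built from end() designates the last element. A sketch using std::reverse_iterator (the helper name is hypothetical):

```cpp
#include <iterator>
#include <vector>

// Sketch: a reverse iterator dereferences one position before its
// stored base iterator, so an iterator built from end() refers to the
// last element of the sequence.
int last_element(const std::vector<int>& v)
{
    std::reverse_iterator<std::vector<int>::const_iterator> r(v.end());
    return *r;   // *(v.end() - 1)
}
```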
24.3.1.2.5: reverse_bidirectional_iterator::operator--() doesn't say what it returns. Should say *this. 24.3.1.3: reverse_iterator::base and operator* should both be const member functions. 24.3.1.3: reverse_iterator has no semantics for operators +, +=, -, -=, [] 24.3.1.3: reverse_iterator::operator[] should be a const member function. 24.3.1.3: The four template operators defined for class reverse_iterator should be declared outside the class. 24.3.1.3: reverse_iterator has a Note that discusses reverse_bidirectional_iterator, for some reason. 24.3.1.4.1: reverse_iterator default constructor is not described. 24.3.1.4.5: reverse_iterator::operator--() doesn't say what it returns. Should say *this. 24.3.2.3: Template class front_insert_iterator should not have a Returns clause. 24.3.2.5: insert_iterator::operator++(int) returns a reference to *this, unlike in other classes. Otherwise, the update of iter by operator= gets lost. 24.3.2.6.5: Declaration for template function inserter is missing second template argument, class Iterator. It is also missing second function argument, of type Iterator. 24.4.3: istreambuf_iterator should have a member ``bool fail() const'' that returns true if any extractions from the controlled basic_streambuf fail. This is desperately needed by istream to restore its original approved functionality when these iterators are used with facet num_get. 24.4.3.1: istreambuf_iterator::proxy is not needed (once istreambuf_iterator is corrected as described below). It should be removed. 24.4.3.2: istreambuf_iterator(basic_istream s) should construct an end-of-stream iterator if s.rdbuf() is null. Otherwise, it should extract an element, as if by calling s->rdbuf()->sgetc(), and save it for future delivery by operator*(). (Lazy input, however, should be permitted.) 24.4.3.2: istreambuf_iterator(basic_streambuf *) has no description 24.4.3.2: istreambuf_iterator(const proxy&) should be removed. 
24.4.3.3: istreambuf_iterator::operator*() should deliver a stored element, or call sgetc() on demand, then store the element. It should *not* extract a character, since this violates the input_iterator protocol. 24.4.3.4: istreambuf_iterator::operator++() Effects should say that it alters the stored element as if by calling s->snextc(), where s is the stored basic_streambuf pointer. 24.4.3.4: istreambuf_iterator::operator++(int) Effects should say that it saves a copy of *this, then calls operator++(), then returns the stored copy. Its return value should be istreambuf_iterator, not proxy. 24.4.3.8: template operator==(istreambuf_iterator&, istreambuf_iterator&) should have const operands. 24.4.3.8: template operator!=(istreambuf_iterator&, istreambuf_iterator&) should have const operands. It also is ambiguous in the presence of template<class T> operator!=(T, T) (as are many operators in the library). 24.4.4: ostreambuf_iterator::equal is silly, since output iterators cannot in general be compared. It should be struck. 24.4.4: ostreambuf_iterator should have a member ``bool fail() const'' that returns true if any insertions into the controlled basic_streambuf fail. This is desperately needed by ostream to restore its original approved functionality when these iterators are used with facet num_put. 24.4.4.1: ostreambuf_iterator::ostreambuf_iterator() produces a useless object. It should be struck. 24.4.4.1: ostreambuf_iterator::ostreambuf_iterator(streambuf *) should require that s not be null, or define behavior if it is. 24.4.4.2: ostreambuf_iterator::equal is not needed and should be struck. 24.4.4.3: ostreambuf_iterator::operator== is silly, since output iterators cannot in general be compared. It should be struck. 24.4.4.3: ostreambuf_iterator::operator!= is silly, since output iterators cannot in general be compared. It should be struck. 
-------- Clause 25 Issues ------- 25: adjacent_find should take/return forward iterators, not input iterators, to return a truly useful value. 25: Template function search(FI, FI, Size, T [, Pred]) is irreparably ambiguous. Should be struck. (Similar versions have already been struck without approval.) 25: min/max_element should take/return forward iterators, not input iterators, to return a truly useful value. 25.1.1: for_each Notes: ``If f returns a result, the result is ignored.'' This is normative. 25.1.5: adjacent_find should probably work with forward iterators, not input iterators. Otherwise, the return value is of only limited usefulness. (You can dereference it, but you can't increment it.) 25.1.5: adjacent_find Complexity refers to "value", whose meaning is left to the reader's imagination. Probably it means *i, the value at the returned iterator. Complexity: Exactly find(first, last, value) - first applications of the corresponding predicate. And surely the value is off-by-one? adjacent_find reports the first of the two matching values, so if that match is equal to the first iterator, then exactly one application of the predicate has been performed? 25.1.9: Paragraph 5: The second set of signatures for search seems completely irreconcilable with the first set. There's no way to avoid ambiguities: anything that would match the four (or five) parameters of one would match the four (or five) parameters of the other. 25.1.9: search(FI, FI, Size, T [, Pred]) is hopelessly ambiguous. It should be struck. 25.2.4: replace_copy_if differs from replace_copy in some expected ways, but one difference is unexpected: first and last are just Iterators, instead of InputIterators. If this is intentional, add a note to explain why. 25.2.5: fill should take output iterators, just like fill_n. 25.2.8: unique should permit a worst-case complexity of last-first applications of the predicate. 
The HP implementation calls adjacent_find, then unique_copy, to speed processing of lists with few or no duplicates. To permit this useful optimization, one comparison must be repeated. 25.3.3: Binary search: the discussion of non-random and random, logarithmic or linear, is completely unclear. 25.3.3: Binary search functions: The description must certainly imply that the sequence is ordered (increasing, according to the operator< or comp that is used) but this should be stated explicitly. Should the range of i be changed to: [first, last]? - Yes. 25.3.3.3: The description of equal_range doesn't sound like the sequence needs to be sorted, but it probably needs a clear statement like all the others in this section. 25.3.7: min_element and max_element should use forward iterators, not input iterators. Otherwise, the return value is useless. 25.3.8: lexicographical_compare: According to lib-3934, the complexity limit should be ... Complexity: At most 2 * min((last1 - first1), (last2 - first2)) applications of the corresponding comparison. 25.4: Note for <cstdlib> says that the qsort function argument ``shall have extern "C" linkage.'' I recall some discussion of this possibility, but no resolution to that effect. Note also says that qsort ``is allowed to propagate'' an exception, but evidently not required to do so. In the same context, bsearch is mentioned, but it is not given a similar constraint. Needs to be clarified. -------- Clause 26 Issues ------- 26.2: operator!= with two complex operands is ambiguous in the presence of template operator!=(T, T). 26.2: operator== and operator!= for complex<T> should return bool, not complex<T>. 26.2: Complex functions acos, asin, atan, and atan2 are an unapproved addition. All should be struck. 26.2: Complex function log10 is an unapproved addition. It should be struck. 26.2: Complex functions tan and tanh are an unapproved addition. Both should be struck. 26.2: The macro __STD_COMPLEX has been omitted from <complex>. 
This is an unapproved change. It should be restored. 26.2.1: The three constructors complex(), complex(T), and complex(T, T) are later shown as complex(T = 0, T = 0). The latter notation should also be used in the template definition. 26.2.1: Template class complex needs to clarify the constraints on the type T for which it can be instantiated. It looks like you must be able to perform arbitrary arithmetic mixed with floating-point operands. It suggests that T() behaves like a numeric zero. But nothing is said. Nor is it clear how an implementation is expected to compute the hyperbolic cosine of an arbitrary type T. More words are desperately needed here. The simplest reasonable constraint is that T is a type that defines conversions to and from one of the floating-point types -- double for best performance on average, long double for maximum precision. 26.2.3: The "template<class T>" shouldn't appear on the declaration: template<class T> complex(T re = T(), T im = T()); (Alternatively, the qualifier complex:: should be added, but this departs from earlier style. Same comments apply for later members in 26.2.4.) 26.2.5: complex operator != Returns is curdled. Should be !(lhs == rhs). 26.2.6: Template arg(complex) Returns should say ``the phase angle of x, or atan2(imag(x), real(x)).'' 26.2.6: Template conj(complex) Returns should say ``the complex conjugate of x, complex(real(x), -imag(x)).'' 26.2.6: template<class T> complex<T> polar(T rho, const t& theta); What is t in const t&? Should probably be T. 26.2.7: complex acos, asin, atan, and atan2 are an unapproved addition. They should be struck. 26.2.7: complex log10 is an unapproved addition. It should be struck. 26.2.7: complex tan and tanh are an unapproved addition. They should be struck. 26.3: Description of class slice should explain what ``BLAS-like'' means. 26.3: valarray operator&& and operator|| overloads should probably all return valarray<bool>, not valarray<T>. 
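The two Returns clauses proposed for arg and conj translate directly into code. A sketch, with hypothetical free-function names:

```cpp
#include <cmath>
#include <complex>

// Sketch of the proposed Returns clauses: arg as the phase angle
// atan2(imag(x), real(x)), and conj as (real(x), -imag(x)).
double arg_sketch(const std::complex<double>& x)
{
    return std::atan2(x.imag(), x.real());
}

std::complex<double> conj_sketch(const std::complex<double>& x)
{
    return std::complex<double>(x.real(), -x.imag());
}
```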
26.3.1: valarray::operator T*() and valarray::operator const T*() cause numerous ambiguities. They should be omitted, or replaced by: T *begin(); const T *begin() const;

26.3: Paragraph 2 is unnecessary, since an implementation is permitted to make *any* function inline. It should be struck.

26.3.1.1: Much of valarray() Effects is non-normative commentary. It should be moved to Notes.

26.3.1.1: ~valarray() has no stated semantics.

26.3.1.2: First line: valarra<T>y should be valarray<T>.

26.3.1.2: valarray::operator= has non-normative text in Effects: ``Assignment is the usual way to change the length of an array after initialization.''

26.3.1.3: Para 4. This sentence should be a non-normative note: ``This property indicates an absence of aliasing and may be used to advantage by optimizing compilers.202)''

26.3.1.7: valarray::operator T*() and valarray::operator const T*() cause numerous ambiguities. They should be omitted, or replaced by: T *begin(); const T *begin() const;

26.3.2.3: The description of valarray::min/max is unclear on which of these functions uses operator< and which operator>. The STL convention is to require only operator< for both. The description should be at least clarified, if not made simpler and more consistent.

26.3.4: The behavior of valarray<T>::operator[](slice) is described only in an example, which is non-normative.

26.3.5: The behavior of valarray<T>::operator[](gslice) is described only in an example, which is non-normative.

26.3.5.1: Declarations of gslice constructors are duplicated.

26.3.8: An indirect_array is produced by subscripting valarray with a const valarray<size_t>, not a const valarray<int>. (See definition of valarray.)

26.4.3: Template function partial_sum does not ``assign to every iterator i...'' It ``assigns to *i for every iterator i...'' It should also clarify that the first element of the sequence is simply *first.

26.5: Table 64 omits tan and tanh for no apparent reason.
-------- Clause 27 Issues -------

27.1.1: The definition of ``character'' is inadequate. It should say that it is a type that doesn't need to be constructed or destroyed, and that a bitwise copy of it preserves its value and semantics. It should also say that it can't be any of the builtin types for which conflicting inserters are defined in ostream or extractors are defined in istream.

27.1.2.4: The description of type POS_T contains many awkward phrases. It needs rewriting for clarity.

27.1.2.4: Paragraph 2 has ``alg'' instead of ``all.''

27.1.2.4: Footnote 207 should say ``for one of'' instead of ``for one if.'' Also, it should say ``whose representation has at least'' instead of ``whose representation at least.''

27.2: Forward declarations for template classes basic_ios, basic_istream, and basic_ostream should have two class parameters, not one. It is equally dicey to define ios, istream, etc. by writing just one parameter for the defining classes. All should have the second parameter supplied, which suggests the need for a forward reference to template class ios_char_traits as well, or at least the two usual specializations of that class.

27.3: <iostream> is required to include <fstream>, but it contains no overt references to that header.

27.3.1: cin.tie() returns &cout, not cout.

27.3.2: wcin.tie() returns &wcout, not wcout.

27.4: streamsize is shown as having type INT_T, but subclause 27.1.2.2 says this is the integer form of a character (such as int/wint_t). streamsize really must be a synonym for int or long, to satisfy all constraints imposed on it. (See Footnote 211.)

27.4: The synopsis of <ios> is missing streampos and wstreampos. (They appear in the later detailed semantics.) They should be added.

27.4: The synopsis of <ios> has the declaration: template <class charT> struct ios_traits<charT>; The trailing <charT> should be struck.

27.4.1: Type wstreamoff seems to have no specific use. It should be struck.

27.4.1: Type wstreampos seems to have no specific use. It should be struck.
27.4.2: ios_traits::state_type is listed as ``to be specified.'' It needs to be specified.

27.4.2: The definition of ios_traits lists the arguments backwards for is_whitespace. It should have const ctype<char_type>& second, as in the later description. (Also, the first argument should be int_type, as discussed in 27.4.2.3.)

27.4.2: The ios_traits description should make clear whether user specialization is permitted. If it isn't, then various operations in <locale> and string_char_traits are rather restrictive. If it is, then the draft should be clear that ios_traits<char> and ios_traits<wchar_t> cannot be displaced by a user definition.

27.4.2: The draft now says ``an implementation shall provide'' instantiations of ios_traits<char> and ios_traits<wchar_t>. It was changed without approval from ``an implementation may provide.'' This change directly contradicts Nov 94 Resolution 23. The proper wording should be restored.

27.4.2.1: ios_traits::state_type is defined in the wrong place. basic_streambuf should be permitted to define this type, since it knows best what it needs. Having it in ios_traits is close to useless.

27.4.2.2: ios_traits::not_eof should take an argument of int_type, not char_type.

27.4.2.2: ios_traits::not_eof says nothing about the use made of its argument c. It should say that it returns c unless it can be mistaken for an eof().

27.4.2.2: ios_traits::not_eof has two Returns clauses. The second is an overspecification and should be struck.

27.4.2.2: ios_traits::length has an Effects clause but no Returns clause. The Effects clause should be reworded as a Returns clause.

27.4.2.3: The first argument to is_whitespace has been changed from int_type to char_type with no enabling resolution. It is also a bad idea. It should be restored to int_type.

27.4.2.3: is_whitespace supposedly behaves ``as if it returns ctype.isspace(c),'' but that function doesn't exist.
It should say ``as if it returns ctype.is(ctype_base::space, c).'' Even better, the function should be dropped, letting basic_istream call the obvious function directly.

27.4.2.3: The draft now says that the ios_traits functions to_char_type, to_int_type, and copy are ``provided from the base struct string_char_traits<CHAR-T>.'' This is a substantive change made without approval. It is also nonsensical, since there is no such ``base struct.'' The wording should be struck.

27.4.2.4: ios_traits::to_char_type has an Effects clause which should be reworded as a Returns clause.

27.4.2.4: ios_traits::to_int_type has an Effects clause which should be reworded as a Returns clause.

27.4.2.4: ios_traits::copy has an Effects clause which should be reworded as a Returns clause. (It returns src.)

27.4.2.4: ios_traits::get_state should be specified to do more than return zero. The semantics are inadequate. A pos_type conceptually has three components: an off_type (streamsize), an fpos_t, and a state_type (mbstate_t, which may be part of fpos_t). It must be possible to compose a pos_type from these elements, in various combinations, and to decompose them into their three parts. As with state_type (27.4.2.1), this doesn't belong in ios_traits. It should be removed, along with get_pos.

27.4.2.4: ios_traits::get_pos should be specified to do more than return pos_type(pos). The semantics are inadequate. See the comments on get_state above.

27.4.3: ios_base::fill() cannot return int_type because it's not defined. It should be int if fill() is left in ios_base.

27.4.3: ios_base::precision() and width() should deal in streamsize arguments and return values, not int. (Even more precisely, they should be moved to basic_ios and have all their types changed to traits::streamoff.)

27.4.3.1.6: ~Init() should call flush() for wcout, wcerr, and wclog, not just for cout, cerr, and clog.

27.4.3.2: ios_base::fill(int_type) cannot receive or return int_type because it's not defined.
Both should be int if fill(int_type) is left in ios_base.

27.4.3.4: ios_base::iword allocates an array of long, not of int.

27.4.3.4: ios_base::iword Notes describes a normative limitation on the lifetime of a returned reference. It should not be in a Notes clause. It should also say that the reference becomes invalid after a copyfmt, or when the ios_base object is destroyed.

27.4.3.4: ios_base::pword Notes describes a normative limitation on the lifetime of a returned reference. It should not be in a Notes clause. It should also say that the reference becomes invalid after a copyfmt, or when the ios_base object is destroyed.

27.4.3.5: The protected constructor ios_base::ios_base() must *not* assign initial values to its member objects as indicated in Table 72. That operation must be deferred until basic_ios::init is called. The clause should say here that the constructor does no initialization, then move Table 72 to the description of basic_ios::init (27.4.4.1). It should also emphasize that the object *must* be initialized before it is destroyed (thanks to reference counting of locale objects).

27.4.3.5: Table 72 shows the result of rdstate() for a newly constructed ios_base object, but that object defines no such member function. (This will be fixed if the table is moved to basic_ios, as proposed.)

27.4.4.1: basic_ios::basic_ios() has next to no semantics. It needs to be specified: Effects: Constructs an object of class basic_ios, leaving its member objects uninitialized. The object *must* be initialized by calling init(basic_streambuf *sb) before it is destroyed.

27.4.4.1: basic_ios::init(basic_streambuf *sb) has no semantics. It needs to be specified: Postconditions: rdbuf() == sb, tie() == 0, and ios_base initialized according to Table 72 (currently in 27.4.3.5).

27.4.4.2: basic_ios::tie is not necessarily synchronized with an *input* sequence. It can also be used with an output sequence.

27.4.4.2: basic_ios::imbue(const locale&) should call rdbuf()->pubimbue(loc) only if rdbuf() is not a null pointer.
Even better, it should not call rdbuf()->pubimbue(loc) at all. Changing the locale that controls stream conversions is best decoupled from changing the locale that affects numeric formatting, etc. Anyone who knows how to imbue a proper pair of codecvt facets in a streambuf won't mind having to make an explicit call.

27.4.4.2: basic_ios::imbue(const locale&) doesn't specify what value it returns. It should say it returns whatever ios_base::imbue(loc) returns.

27.4.4.2: basic_ios::copyfmt should say that both rdbuf() and rdstate() are left unchanged, not just the latter.

27.5.2: basic_streambuf::sgetn should return streamsize, not int_type.

27.5.2: basic_streambuf::sungetc should return int_type, not int.

27.5.2: basic_streambuf::sputc should return int_type, not int.

27.5.2: basic_streambuf::sputn should return streamsize, not int_type.

27.5.2.2.3: In the in_avail Returns clause, gend() should be egptr() and gnext() should be gptr().

27.5.2.2.3: basic_streambuf::sbumpc Returns should not say the function converts *gptr() to char_type. The function returns the int_type result of the call.

27.5.2.2.3: basic_streambuf::sgetc Returns should not say the function converts *gptr() to char_type. The function returns the int_type result of the call.

27.5.2.2.3: basic_streambuf::sgetn should return streamsize, not int.

27.5.2.2.4: basic_streambuf::sungetc should return int_type, not int.

27.5.2.2.4: basic_streambuf::sputc should return int_type, not int.

27.5.2.2.5: basic_streambuf::sputc does not return *pptr(), which points at storage with undefined content. It returns traits::to_int_type(c).

27.5.2.4.2: basic_streambuf::sync now requires that buffered input characters ``are restored to the input sequence.'' This is a change made without approval. It is also difficult, or even impossible, to do so for input streams on some systems, particularly for interactive or pipelined input. The Standard C equivalent of sync leaves input alone. Posix *discards* interactive input.
This added requirement is none of the above. It should be struck.

27.5.2.4.3: basic_streambuf::showmanyc Returns has been corrupted. The function should return the number of characters that can be read with no fear of an indefinite wait while underflow obtains more characters from the input sequence. traits::eof() is only part of the story. It needs to be restored to the approved intent. (See footnote 218.)

27.5.2.4.3: basic_streambuf::showmanyc Notes says the function uses traits::eof(). This is not necessarily true.

27.5.2.4.3: Footnote 217 is nonsense unless showmany is corrected to showmanyc.

27.5.2.4.3: basic_streambuf::underflow has two Returns clauses. They should be combined to be comprehensive.

27.5.2.4.3: basic_streambuf::uflow default behavior ``does'' gbump(1), not gbump(-1). It also returns the value of *gptr() *before* ``doing'' gbump.

27.5.2.4.3: basic_streambuf::uflow has a nonsense Returns clause. It should be struck.

27.5.2.4.4: basic_streambuf::pbackfail's argument should be int_type, not int.

27.5.2.4.4: basic_streambuf::pbackfail Notes begins a sentence with ``Other calls shall.'' ``Shall'' can't apply to user program behavior, by the accepted conformance model.

27.6: The <iomanip> synopsis has includes for <istream> and <ostream>, but none of the declarations appear to depend on either of these headers. They should be replaced by an include for <ios>.

27.6.1.1: basic_istream::seekg(pos_type&) and basic_istream::seekg(off_type&, ios_base::seekdir) should both have const first parameters.

27.6.1.1: basic_istream paragraph 2 says extractors may call rdbuf()->sbumpc(), rdbuf()->sgetc(), or ``other public members of istream except that they do not invoke any virtual members of rdbuf() except uflow().'' This is a constraint that was never approved. Besides, rdbuf()->sgetc() invokes underflow(), as does uflow() itself, and the example of ipfx in 27.6.1.1.2 uses rdbuf()->sputbackc(). The added constraint should be struck.
27.6.1.1: basic_istream definition, paragraph 4 is confusing, particularly in the light of similar errors in 27.6.2.1 and 27.6.2.4.2 (basic_ostream). It says, ``If one of these called functions throws an exception, then unless explicitly noted otherwise the input function calls setstate(badbit) and if badbit is on in exception() rethrows the exception without completing its actions.'' But the setstate(badbit) call may well throw an exception itself, as is repeatedly pointed out throughout the draft. In that case, it will not return control to the exception handler in the input function. So it is foolish to test whether badbit is set -- it can't possibly be. Besides, I can find no committee resolution that calls for exceptions() to be queried in this event. An alternate reading of this vague sentence implies that setstate should rethrow the exception, rather than throw ios_base::failure, as is its custom. But the interface to setstate provides no way to indicate that such a rethrow should occur, so these putative semantics cannot be implemented. The fix is to alter the ending of the sentence to read, ``and if setstate returns, the function rethrows the exception without completing its actions.'' (It is another matter to clarify what is meant by ``completing its actions.'')

27.6.1.1.2: basic_istream::ipfx Notes says the second argument to traits::is_whitespace is ``const locale *''. The example that immediately follows makes clear that it should be ``const ctype<charT>&''.

27.6.1.1.2: Footnote 222 makes an apparently normative statement in a non-normative context.

27.6.1.2.1: The basic_istream description is silent on how void* is converted. Can an implementation use num_get<charT>::get for one of the integer types? Must it *not* use this facet? Is a version of get missing in the facet? This needs to be clarified.

27.6.1.2.1: The example of a call to num_get<charT>::get has nonsense for its first two arguments.
Instead of ``(*this, 0, '' they should be ``(istreambuf_iterator<charT>(rdbuf()), istreambuf_iterator<charT>(0), ''

27.6.1.2.1: The example of numeric input conversion says ``the conversion occurs `as if' it performed the following code fragment.'' But that fragment contains the test ``(TYPE)tmp != tmp'', which often has undefined behavior for any value of tmp that might yield a true result. The test should be replaced by a metastatement such as ``<tmp can be safely converted to TYPE>''. (Then num_get needs a version of get for extracting type float to make it possible to write num_get in portable C++ code.)

27.6.1.2.1: Paragraph 4, last sentence doesn't make sense. Perhaps ``since the flexibility it has been...'' should be ``since for flexibility it has been...'' But I'm not certain. Subsequent sentences are even more mysterious.

27.6.1.2.1: The use of num_get facets to extract numeric input leaves very unclear how streambuf exceptions are caught and properly reported. 22.2.2.1.2 makes clear that the num_get::get virtuals call setstate, and hence can throw exceptions *that should not be caught* within any of the input functions. (Doing so typically causes the input function to call setstate(badbit), which is *not* called for as part of reporting eof or scan failure.) On the other hand, the num_get::get virtuals are called with istreambuf_iterator arguments, whose very constructors might throw exceptions that need to be caught. And the description of the num_get::get virtuals is silent on the handling of streambuf exceptions. So it seems imperative that the input functions wrap each call to a num_get::get function in a try block, but doing so will intercept any exceptions thrown by setstate calls within the num_get::get functions. A related problem occurs when eofbit is on in exceptions() and the program attempts to extract a short at the very end of the file.
If num_get::get(..., long) calls setstate, the failure exception will be thrown before the long value is converted and stored in the short object, which is *not* the approved behavior. The best fix I can think of is to have the num_get::get functions return an ios_base::iostate mask which specifies what errors the caller should report to setstate. The ios& argument could be a copy of the actual ios for the stream, but with exceptions cleared. The num_get::get functions can then continue to call setstate directly with no fear of throwing an exception. But all this is getting very messy for such a time-critical operation as numeric input.

27.6.1.2.2: basic_istream::operator>>(char_type *) extracts an upper limit of numeric_limits<int>::max() ``characters.'' This is a silly and arbitrary number, just like its predecessor INT_MAX for characters of type char. A more sensible value is size_t(-1) / sizeof (char_type) - 1. It could just say ``the size of the largest array of char_type that can also store the terminating null.''

27.6.1.2.2: basic_istream::operator>>(bool&) has nonsense for its first two arguments. They should be istreambuf_iterator<charT, traits>(rdbuf()), istreambuf_iterator<charT, traits>(0), etc.

27.6.1.2.2: basic_istream::operator>>(bool&) paragraph 3 describes the behavior of num_get::get. The description belongs in clause 22.

27.6.1.2.2: basic_istream::operator>>(unsigned short&) cannot properly check negated inputs. The C Standard is clear that -1 is a valid field, yielding 0xffff (for 16-bit shorts). It is equally clear that 0xffffffff is invalid. But num_get::get(..., unsigned long&) delivers the same bit pattern for both fields (for 32-bit longs), with no way to check the origin. One fix is to have the extractor for unsigned short (and possibly for unsigned int) pick off any '-' flag and do the checking and negating properly, but that precludes a user-supplied replacement for the num_get facet from doing some other magic.
Either the checking rules must be weakened over those for Standard C, the interface to num_get must be broadened, or the extractor must be permitted to do its own negation.

27.6.1.2.2: basic_istream::operator>>(basic_streambuf *sb) now says, ``If sb is null, calls setstate(badbit).'' This requirement was added without committee approval. It is also inconsistent with the widespread convention that badbit should report loss of integrity of the stream proper (not some other stream). A null sb should set failbit.

27.6.1.2.2: basic_istream::operator>>(basic_streambuf<charT,traits>* sb), last line of Effects paragraph 4 can't happen. The previous sentence says, ``If the function inserts no characters, it calls setstate(failbit), which may throw ios_base::failure.'' Then the last sentence says, ``If failure was due to catching an exception thrown while extracting characters from sb and failbit is on in exceptions(), then the caught exception is rethrown.'' But in this case, setstate has already thrown ios_base::failure. Besides, I can find no committee resolution that calls for exceptions() to be queried in this event. In fact, the approved behavior was simply to terminate the copy operation if an extractor throws an exception, just as for get(basic_streambuf&) in 27.6.1.3. The last sentence should be struck.

27.6.1.3: basic_istream::get(basic_streambuf& sb) Effects says it inserts characters ``in the output sequence controlled by rdbuf().'' It should be the sequence controlled by sb.

27.6.1.3: basic_istream::readsome refers several times to in_avail(), which is not defined in the class. All references should be to rdbuf()->in_avail(). And the description should specify what happens when rdbuf() is a null pointer. (Presumably it sets badbit.)

27.6.1.3: basic_istream::readsome is now defined for rdbuf()->in_avail() < 0. The original proposal defined only the special value -1. Otherwise, it requires that rdbuf()->in_avail() >= 0. This should be restored.
27.6.1.3: basic_istream::readsome cannot return read, as stated. That function has the wrong return type. It should return gcount().

27.6.1.3: basic_istream::putback does *not* call ``rdbuf->sputbackc(c)''. It calls ``rdbuf()->sputbackc(c)'', and then only if rdbuf() is not null.

27.6.1.3: basic_istream::unget does *not* call ``rdbuf->sungetc(c)''. It calls ``rdbuf()->sungetc(c)'', and then only if rdbuf() is not null.

27.6.1.3: basic_istream::sync describes what happens when rdbuf()->pubsync() returns traits::eof(), but that can't happen in general because pubsync returns an int, not an int_type. This is an unauthorized, and ill-advised, change from the original EOF. The return value should also be EOF.

27.6.1.3: basic_istream::sync Notes says the function uses traits::eof(). Obviously it doesn't, as described above. The clause should be struck.

27.6.2.1: basic_ostream::seekp(pos_type&) and basic_ostream::seekp(off_type&, ios_base::seekdir) should both have const first parameters.

27.6.2.1: basic_ostream definition, last line of paragraph 2 can't happen. It says, ``If the called function throws an exception, the output function calls setstate(badbit), which may throw ios_base::failure, and if badbit is on in exceptions() rethrows the exception.'' But in this case, setstate has already thrown ios_base::failure. Besides, I can find no committee resolution that calls for exceptions() to be queried in this event. The last sentence should end with, ``and if setstate returns, the function rethrows the exception.''

27.6.1.2.1: The use of num_put facets to insert numeric output leaves very unclear how output failure is reported. Only the ostreambuf_iterator knows when such a failure occurs. If it throws an exception, the calling code in basic_ostream is obliged to call setstate(badbit) and rethrow the exception, which is *not* the approved behavior for failure of a streambuf primitive.
Possible fixes are: have ostreambuf_iterator report a specific type of exception, have ostreambuf_iterator remember a failure for later testing, or give up on restoring past behavior. Something *must* be done in this area, however.

27.6.2.4.1: Table 76 is mistitled. It is not just about floating-point conversions.

27.6.2.4.1: Table 77 has an unauthorized change of rules for determining fill padding. It gives the three defined states of flags() & adjustfield as left, internal, and otherwise. It should be right, internal, and otherwise. This needs to be restored to the earlier (approved) logic.

27.6.2.4.2: basic_ostream::operator<<(bool) should use ostreambuf_iterator, not istreambuf_iterator. The first argument is also wrong in the call to num_put::put.

27.6.2.4.2: basic_ostream::operator<<(basic_streambuf *sb) says nothing about sb being null, unlike the corresponding extractor (27.6.1.2.2). It should either leave both undefined or say both set failbit.

27.6.2.4: basic_ostream::operator<<(streambuf *) says nothing about the failure indication when ``inserting in the output sequence fails''. It should say the function sets badbit.

27.6.2.4.2: basic_ostream::operator<<(basic_streambuf<charT,traits>* sb), last line of Effects paragraph 2 can't happen. The previous sentence says that if ``an exception was thrown while extracting a character, it calls setstate(failbit) (which may throw ios_base::failure).'' Then the last sentence says, ``If an exception was thrown while extracting a character and failbit is on in exceptions() the caught exception is rethrown.'' But in this case, setstate has already thrown ios_base::failure. Besides, I can find no committee resolution that calls for exceptions() to be queried in this event. And an earlier sentence says unconditionally that the exception is rethrown. The last sentence should be struck.

27.6.2.5: basic_ostream::flush can't test for a return of traits::eof() from basic_streambuf::pubsync. It tests for EOF.
27.6.3: ``headir'' should be ``header.''

27.6.7: The <sstream> synopsis refers to the nonsense class int_charT_traits. It should be ios_traits.

27.7: Table 77 (the <cstdlib> synopsis) is out of place in the middle of the presentation of <sstream>.

27.7.1: basic_stringbuf::basic_stringbuf(basic_string, openmode) Effects repeats the phrase ``initializing the base class with basic_streambuf().'' Strike the repetition.

27.7.1: basic_stringbuf::basic_stringbuf(basic_string, openmode) Postconditions requires that str() == str. This is true only if which has in set. The condition should be restated.

27.7.1: Table 78 describes calls to setg and setp with string arguments, for which no signature exists. It needs to be recast.

27.7.1: basic_stringbuf::str(basic_string s) Postconditions requires that str() == s. This is true only if which had in set at construction time. The condition should be restated.

27.7.1.2: Table 80 describes calls to setg and setp with string arguments, for which no signature exists. It needs to be recast.

27.7.1.3: basic_stringbuf::underflow Returns should return int_type(*gptr()), not char_type(*gptr()).

27.7.1.3: basic_stringbuf::pbackfail returns c (which is int_type) in the first case, char_type(c) in the second case. Both cases should be c.

27.7.1.3: basic_stringbuf::pbackfail supposedly returns c when c == eof(). It should return traits::not_eof(c).

27.7.1.3: basic_stringbuf::seekpos paragraph 4 has ``positionedif'' run together.

27.8.1.1: basic_filebuf paragraph 3 talks about a file being ``open for reading or for update,'' and later ``open for writing or for update.'' But ``open for update'' is not a defined term. It should be struck in both cases.

27.8.1.3: basic_filebuf::is_open allegedly tests whether ``the associated file is available and open.'' No definition exists for ``available.'' The term should be struck.

27.8.1.3: basic_filebuf::open Effects says the function fails if is_open() is initially false. It should be if initially true.
27.8.1.3: basic_filebuf::open Effects says the function calls the default constructor for basic_streambuf. This is nonsense. It should say, at most, that it initializes the basic_filebuf as needed, and then only after it succeeds in opening the file.

27.8.1.3: Table 83 has a duplicate entry for file open mode ``in | out''.

27.8.1.4: filebuf::showmanyc (and several overridden virtual functions that follow) have a Requires clause that says ``is_open == true.'' The behavior of all these functions should be well defined in that event, however. Typically, the functions all fail. The Requires clause should be struck in all cases.

27.8.1.4: filebuf::showmanyc Effects says the function ``behaves the same as basic_streambuf::showmanyc.'' The description adds nothing and should be struck.

27.8.1.4: basic_filebuf::underflow Effects shows the arguments to convert as ``st,from_buf,from_buf+FSIZE,from_end,to_buf, to_buf+to_size, to_end''. st should be declared as an object of type state_type, and n should be defined as the number of characters read into from_buf. Then the arguments should be ``st, from_buf, from_buf + n, from_end, to_buf, to_buf + TSIZE, to_end''. Also, the template parameter should be ``traits::state_type,'' not ``ios_traits::state_type.''

27.8.1.4: basic_filebuf::underflow is defined unequivocally as the function that calls codecvt, but there are performance advantages to having this conversion actually performed in uflow. If the specification cannot be broadened sufficiently to allow either function to do the translation, then uflow loses its last rationale for being added in the first place. Either the extra latitude should be granted implementors or uflow should be removed from basic_streambuf and all its derivatives.

27.8.1.4: basic_filebuf::pbackfail(traits::eof()) used to return a value other than eof() if the function succeeded in backing up the input.
Now the relevant Returns clause says the function returns the metacharacter c, which is indistinguishable from a failure return. This is an unapproved change. It should probably say the function returns traits::not_eof(c).

27.8.1.4: basic_filebuf::pbackfail Notes now says ``if is_open() is false, the function always fails.'' This is an unapproved change. The older wording should be restored.

27.8.1.4: basic_filebuf::pbackfail Notes now says ``the function does not put back a character directly to the input sequence.'' This is an unapproved change and not in keeping with widespread practice. The older wording should be restored.

27.8.1.4: basic_filebuf::pbackfail has a Default behavior clause. It should be struck.

27.8.1.4: basic_filebuf::overflow Effects shows the arguments to convert as ``st,b(),p(),end,buf,buf+BSIZE,ebuf''. st should be declared as an object of type state_type. Then the arguments should be ``st, b, p, end, buf, buf + BSIZE, ebuf''. Also, the template parameter should be ``traits::state_type,'' not ``ios_traits::state_type.''

27.8.1.4: basic_filebuf::overflow doesn't say what it returns on success. It should say it returns c.

27.8.1.4: basic_filebuf::setbuf has no semantics. They need to be supplied.

27.8.1.4: basic_filebuf::seekoff Effects is an interesting exercise in creative writing. It should simply state that if the stream is opened as a text file or has state-dependent conversions, the only permissible seeks are with zero offset relative to the beginning or current position of the file. (How to determine that predicate is another matter -- it should state for codecvt that even a request to convert zero characters will return noconv.) Otherwise, behavior is largely the same as for basic_stringstream, from whence the words should be cribbed. The problem of saving the stream state in a traits::pos_type object remains unsolved. The primitives described for ios_traits are inadequate.

27.8.1.4: basic_filebuf::seekpos has no semantics. They need to be supplied.
27.8.1.4: basic_filebuf::sync has no semantics. They need to be supplied.

27.8.1.4: basic_filebuf::imbue has silly semantics. Whether or not sync() succeeds has little bearing on whether you can safely change the working codecvt facet. The most sensible thing is to establish this facet at construction. (Then pubimbue and imbue can be scrubbed completely.) Next best is while is_open() is false. (Then imbue can be scrubbed, since it has nothing to do.) Next best is to permit any imbue that doesn't change the facet or is at beginning of file. Next best is to permit a change of facet at any time provided either the current or new facet does not mandate state-dependent conversions. (See comments under seekoff.)

27.8.1.7: basic_filebuf::rdbuf should not have an explicit qualifier.

27.8.1.9: basic_ofstream::basic_ofstream(const char *s, openmode mode = out) has the wrong default second argument. It should be `out | trunc', the same as for basic_ofstream::open (in the definition, at least).

27.8.1.10: basic_ofstream::open(const char *s, openmode mode = out) has the wrong default second argument. It should be `out | trunc', the same as for basic_ofstream::open in the definition.

27.8.2: The <cstdio> synopsis has two copies of tmpfile and vprintf, and no vfprintf or putchar.

27.8.2: The <cwchar> summary should also list the type wchar_t. Aside from the addition of the (incomplete) type struct tm, this Table 84 is identical to Table 44 in 21.2. It is not clear what purpose either table serves; it is less clear what purpose is served by repeating the table.

27.8.2: The See Also reference for <cwchar> should be 7.13.2, not 4.6.2.

-------- Annex D Issues -------

D.2: Functions overloaded on io_state, open_mode, and seek_dir ``call the corresponding member function.'' But no hint is given as to what constitutes ``correspondence.''

D.3.1: The base class for class strstreambuf should be basic_streambuf<char>, or streambuf, not streambuf<char>.

D.3.1: strstreambuf::freeze's default bool argument should be true, not 1.
D.3.1: strstreambuf::pcount should return streamsize, not int D.3.11: strstreambuf::strstreambuf(char *, streamsize, char *) paragraph 3 has wrong argument for setp. It should be: setp(pbeg_arg, gbeg_arg + N). (This is an ancient typo -- clearly wrong because it violates the stated constraints on the size of the working array.) D.3.1.2: strstreambuf::freeze default value should be true, not 1. D.3.1.3: strstreambuf::overflow, etc. still have references to int_type, ios_traits<char>, etc. Should be reconciled with synopsis. D.3.1.3: strstreambuf::overflow has numerous references to ``eof()'', which no longer exists. All should be changed to EOF. D.3.1.3: strstreambuf::overflow says it returns ``(char)c'' sometimes, but this can pun with EOF if char has a signed representation. More accurate to say it returns (unsigned char)c. D.3.1.3: strstreambuf::pbackfail says it returns ``(char)c'' sometimes, but this can pun with EOF if char has a signed representation. More accurate to say it returns (unsigned char)c. D.3.1.3: strstreambuf::pbackfail says it returns ``(char)c'' when c == EOF, but this can pun with EOF if char has a signed representation. More accurate to say it returns something other than EOF. D.3.1.3: Default arguments to strstreambuf::seekoff third argument must be qualified with ios::. D.3.1.3: strstreambuf::seekoff should position both streams even for way == ios::cur. It then uses the get pointer as a starting point. D.3.1.3: The ``otherwise'' at the beginning of paragraph 8 is a nonsequitir. D.3.1.3: strstreambuf::setbuf has a Default behavior clause, which is not appropriate for a derived stream buffer. It also adds nothing to the definition in the base class. The entire description should be struck. D.3.2: Base class for class istrstream should be basic_istream<char>, not istream<char>. D.3.3: Base class for class ostrstream should be basic_ostream<char>, not ostream<char>. 
D.3.3: ostrstream::ostrstream(char *, int, openmode) should have second argument type of streamsize. D.3.3: ostrstream::freeze should have argument ``bool freezefl = true'' instead of ``int freezefl = true''. | http://www.open-std.org/jtc1/sc22/wg21/docs/papers/1995/N0795.htm | CC-MAIN-2018-34 | refinedweb | 14,583 | 59.3 |
transfer system - tony.lin, Aug 31, 2013 10:50 AM
Hi
I have upgraded from ePO 4.5.6 to ePO 5.0.1, and I now have ePO 5.0.1 set up and ready. I need to transfer the old McAfee Agents to the new system. I used a registered server to do it, but when I enable the transfer-systems option I get the following message: "Master agent-server key(s) must be imported into the remote server prior to importing the sitelist. Go to Server Settings to export security keys from this server. Note that visiting this link now will cause you to lose any unsaved changes to this registered server." I have already imported both the old ePO server and McAfee Agent keys into the new ePO system.
1. Re: transfer system - mbauman8, Sep 29, 2013 6:11 AM (in response to tony.lin)
hi
do the same on the old ePO
import all the keys from the new ePO
ignore the error message about mismatching ePO versions
then it works
2. Re: transfer system - rgc, Sep 30, 2013 12:09 AM (in response to mbauman8)
Hi Tony,
Please follow the step-by-step procedure below to transfer agents from one ePO server to another.
How to Transfer/Move computers from one ePO server to another
Environment: McAfee ePolicy Orchestrator 5.x, 4.6, 4.5
The following is the ePO Server identification for the computer transfer example below:
Server A = Old ePO 4.x/5.x
Server B = New ePO 5.x
In the following steps, we will transfer managed computers from Server A to Server B.
Export the security keys from Server B and import to Server A.
NOTE: Only ASCI keys are required. Only two keys need to be exported and imported, 2048 and 1024.
Log on to the ePO 5.x console.
Click Menu, Configuration, Server Settings.
Click Security Keys under Setting Categories column and click Edit on right pane at bottom of page.
Do the following steps for both of the 2048 bits and 1024 bits keys listed under the Agent-server secure communication keys:
Click the key identified as 2048 bits and click Export.
Click OK to the export key confirmation message.
Click Save, enter or browse to a path to save the security key .ZIP file to and click Save again.
Export: Open the ePO 5.x console == Menu == Server Settings == Security Keys == Edit == click Export with the 2048 key highlighted, then repeat the same steps for the 1024 key.
Import: Open the ePO 4.6 console == Menu == Configuration == Server Settings == Security Keys == Edit == click Import and browse to the zip file of the keys exported from ePO B.
Once the key export and import process is completed, register Server B (ePO 5.x) on Server A (ePO 4.x).
From Server A, log on to the ePO 4.x console
Click Menu, Configuration, Registered Servers.
Click New Server == select ePO == name it == Next == type the credentials to reach Server B (the ePO DB) == click Test Connection == if it is successful, scroll down ==
>> Enable the systems transfer with "Automatic sitelist" and then save.
Once the keys are imported and the ePO is registered:
>> Server A allows you to select the option for transfer.
Open the system tree == select the machine to transfer == Actions == Agent == Transfer Agents == select Server B (ePO 5.x) and OK to transfer.
NOTE: Make sure, the selected machine is communicating to Server A ePO before transfer.
Check the status of the transferred computers after 2 ASCI triggers.
Once the process is completed you will see the agent listed on Server B (ePO 5.x), and it will disappear from the Server A system tree.
>> For confirmation, send a wake-up call from Server B (ePO 5.x) and then confirm the computer is communicating.
Follow the steps and share an update if you find issues.
>> Attached is a document with screenshots of the steps to perform.
Regards,
RGC
3. Re: transfer system - tony.lin, Oct 1, 2013 11:48 PM (in response to rgc)
Hi rgc
Thanks for help, I will try it. | https://community.mcafee.com/thread/59551 | CC-MAIN-2018-05 | refinedweb | 666 | 75.4 |
I'm trying to write a code that calculates a cell phone bill. There are two different services: a regular service that is $10.00 and the first 50 minutes are free and charges over 50 minutes are $.20 per minute and the other a premium service that's $25 and the first 75 day minutes are free and any over that are $.10 and the first 100 night minutes are free and any over that are $.05. This is what I have so far and for some reason I keep getting errors and I'm not sure why. Also, I'm not sure where to calculate the night minutes over 100. Any ideas?
#include <iostream>
using namespace std;

const double rServiceCost = 10.00;
const double rPerMinuteCharge = .20;
const double pServiceCost = 25.00;
const double pDayPerMinuteCharge = .10;
const double pNightPerMinuteCharge = .05;

int main()
{
    int minutes;
    int dayMinutes;
    int nightMinutes;
    char serviceCode;
    int accountNumber;
    double amountDue;

    cout << "Enter an account number: ";
    cin >> accountNumber;
    cout << endl;
    cout << "Enter a service code: R or r (regular), P or p (premium): ";
    cin >> serviceCode;
    cout << endl;

    switch (serviceCode)
    {
    case 'r':
    case 'R':
        cout << "Enter the number of minutes used: ";
        cin >> minutes;
        cout << endl;
        if (minutes > 50)
            amountDue = rServiceCost + ((minutes - 50) * rPerMinuteCharge);
            cout << "Account number = " << accountNumber << endl;
            cout << "Amount Due = $" ,, amountDue << endl;
        else
            amountDue = rServiceCost;
        break;
    case 'p':
    case 'P':
        cout << "Enter the number of day minutes used: ";
        cin >> dayMinutes;
        cout << endl;
        cout << "Enter the number of night minutes used: ";
        cin >> nightMinutes;
        cout << endl;
        if dayMinutes > 75
            amountDue = pServiceCost + ((minutes - 75) * pDayPerMinuteCharge);
        else
            amountDue = pServiceCost;
    }
    return 0;
}
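For reference, here is one way the billing logic could be written so that it compiles and also handles the premium night minutes. It is only a sketch: the rates and free-minute thresholds are the ones stated in the question, the I/O is left out, and the function names (regularBill, premiumBill) are invented for illustration. Two of the original problems are worth calling out: the line with ",, amountDue" uses commas instead of "<<", and the premium branch multiplies by "minutes", a variable that is never read in that branch, instead of "dayMinutes".

```cpp
#include <algorithm>

const double rServiceCost = 10.00;
const double rPerMinuteCharge = 0.20;
const double pServiceCost = 25.00;
const double pDayPerMinuteCharge = 0.10;
const double pNightPerMinuteCharge = 0.05;

// Regular service: $10.00 base, first 50 minutes free,
// $0.20 for every minute over 50.
double regularBill(int minutes)
{
    int extra = std::max(0, minutes - 50);
    return rServiceCost + extra * rPerMinuteCharge;
}

// Premium service: $25.00 base, first 75 day minutes free at $0.10
// per extra minute, first 100 night minutes free at $0.05 per extra minute.
double premiumBill(int dayMinutes, int nightMinutes)
{
    int extraDay = std::max(0, dayMinutes - 75);
    int extraNight = std::max(0, nightMinutes - 100);
    return pServiceCost + extraDay * pDayPerMinuteCharge
                        + extraNight * pNightPerMinuteCharge;
}
```

main() can then just read the inputs, call one of these functions, and print the result; also remember that an if or else body with more than one statement needs braces, which is what trips up the original.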
hello everyone,
I have a simple question:
is there a way to repeat an ammo texture depending on an int?
I mean, for example, I have
public int Ammo = 20;
and I want to draw a GUI texture [of small bullets] 20 times.
Thanks!
Answer by robertbu · Aug 08, 2013 at 01:08 AM
Here is a bit of code that does what you asked for. Attach it to a game object, drag your bullet texture to the 'texture' variable, and adjust any of the other settings.
using UnityEngine;
using System.Collections;
public class RowOfIcons : MonoBehaviour {
public Texture texture;
public Vector2 startPosition = new Vector2(20,20); // Start position to display;
public int spaceBetween = 10;
public int number = 10; // Number to display;
private Rect rect;
void Start() {
rect.x = startPosition.x;
rect.y = startPosition.y;
rect.width = texture.width;
rect.height = texture.height;
}
void OnGUI() {
Rect rectT = rect;
for (int i = 0; i < number; i++) {
GUI.DrawTexture(rectT, texture);
rectT.x += texture.width + spaceBetween;
}
}
}
There are a couple of issues with this code that you should be aware of. First, since it uses GUI, you will have 1 drawcall per icon. Second, since it works in pixels, the amount of screen real estate the icons take will vary from device to device.
Answer by mactinite77 · Aug 08, 2013 at 01:28 AM
To draw a GUITexture multiple times you can use a for loop. For example:

for (int i = 0; i < Ammo; i++) {
    GUI.DrawTexture(new Rect(AmmoTexture.width * i,
        AmmoTexture.height, AmmoTexture.width, AmmoTexture.height), AmmoTexture);
}

This will draw your bullet texture on the screen starting near the upper left corner; to change where it will draw, add what you would normally put to place it to the first two arguments of the Rect.
This is a direct continuation of Angular Lesson 2.
Ultimately, I just wanted computed values in Angular—like Vue has computed values. A getter function that does some computation on the fly every time its property is accessed is not the same thing. Vue’s computed values intelligently watch for changes to the relevant data, and they only compute the computed value again if one of incoming values has changed.
Googling and such led me to MobX and to the package mobx-angular. Mobx-angular purports to be a connector that makes MobX work well with Angular. On first inspection, I did not like the look of mobx-angular because nobody has touched anything but the readme file in years, so I tried to see if you can just directly use MobX in Angular out of the box. MobX itself is a large, robust, professional, up-to-date project.
Actually using MobX in real life for some tiny little widget like I was making would almost definitely be a bad idea. MobX is quite large, and if I only want it for the sake of mimicking computed values, that may not be a good enough reason to pay 37 kb gzipped. That said, I also wanted to learn a little about MobX for its own sake, so I dove in.
MobX is a framework-agnostic library for state management, but after days of sifting through docs and online examples I do strongly get the sense that people like to use it with React. React is outside the scope of this blog series.
On the face of it, it appears you can simply import these observable and computed functions from mobx and they would solve exactly my problem.
import { observable, computed } from 'mobx';. Then you can use them as decorators, or so one thinks. See Angular Lesson 2 for a description of what actually happened.
After all this, I reluctantly took another look at mobx-angular. Let me explain why I’m so skeptical and confused by mobx-angular. I find many references to it online, and according to the npm website, it is downloaded 20 thousand times a week. However, its own github page as of this moment clearly says it is failing to build. On top of that, based on the times since anything was last updated, it looks like this package has fallen into neglect. Code that is downloaded thousands of times per week should be maintained. I’m really not trying to assassinate the code author here. Clearly he wrote something people like, but something is fishy, so I decided to investigate further.
A nice thing about npm packages with source code published on Github is that you can pull the source. You can try to build it yourself. You can execute tests yourself. Here’s what I found when I tried to pull and build mobx-angular: pages and pages of warnings about dependencies being deprecated, several outright errors. The build script did produce something, but when I tried to execute the package’s own testing suite, it caused a fatal error. I’m not saying the tests failed. I’m saying running the test suite failed to execute to completion.
I did that investigation and decided that until somebody brought the source code up to standards, I would never trust this package enough for use in anything important. Eventually, however, I did decide to just try the pre-built version of mobx-angular that is available published to npm. As of this writing, it is version 3.0.1.
Simply put, it works. At least it works in one example that I built. I had to make a few adjustments to my code. I did not have to change my function definitions. I was still able to simply declare things @observable or @computed using those nice, declarative decorators. But in the template I needed to add a directive up top like this
<div *mobxAutorun>. Also over in the app.module.ts file I had to import MobxAngularModule and add it to both the imports and the exports arrays inside the @NgModule decorator. I really have no idea what that does. I was following a post written by the mobx-angular author here:.
I don’t plan to maintain ignorance on what these things do, but let’s not get into it just this second.
After I made these adjustments, I found that altering a state variable unrelated to my computed function did not cause the engine to re-compute. That is what I was chasing. Huzzah! I like that. But this does not make up for shaky source code long left in disrepair.
Mobx-angular is not a large project. I see 4 small source files and an MIT license. I’m strongly considering fixing this package myself. It sounds like a fun little journey where I get to learn a little more about Angular, typescript directives, MobX, and—ultimately—whether Adam Klein will let me write to his project or if I’ll fork it. No promises, but that sounds like an interesting Lesson 3 to me. Maybe lesson 5.
[Update: Angular Lesson 3 can be found here, but it is not a direct continuation of this post.]
Type not found : FlxText
Hi All!
I just got started with Flixel, and I'm already encountering an issue with the "Hello World!" tutorial.
So I followed the steps (using sublime text on Mac OS X) by adding the line with FlxText, but when I tried to compile the project with flash/neko/html5 I got this error:
source/PlayState.hx:9: characters 10-17 : Type not found : FlxText
I installed everything, and tried downgrading lime to 2.9.1 and openfl to 3.6.1.
Any insight on why the library doesn't seem to be working?
- DleanJeans
Did you import FlxText?
import flixel.text.FlxText;
How about other platforms? Do they work?
@DleanJeans
I didn't import FlxText (I thought it was included in the package).
Thank you so much for the help!
- Gama11 administrators
This should work fine with the fully qualified path as the tutorial instructs?
var text = new flixel.text.FlxText(0, 0, 0, "Hello World", 64); | http://forum.haxeflixel.com/topic/544/type-not-found-flxtext | CC-MAIN-2018-22 | refinedweb | 162 | 69.48 |
I have been playing around with cookies for the first time, and I have the saving part of it completed. The data I'm saving are numbers, and the most important thing about these numbers is that I can add, subtract and so on with them. However, when I try to add a number to one of my saved parameters, it adds them as if they were text.
Example:
I have a cookie called value, which I read like this:

function readCookie(name) {
    return (name = new RegExp('(?:^|;\\s*)' + ('' + name).replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&') + '=([^;]*)').exec(document.cookie)) && name[1];
}

When the cookie holds 9, I expect value + 1 = 10, but instead I get 91.

Wrapping the returned value in parseInt() makes the addition work:

function readCookie(name) {
    return parseInt((name = new RegExp('(?:^|;\\s*)' + ('' + name).replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&') + '=([^;]*)').exec(document.cookie)) && name[1]);
}
The + operator in JavaScript can mean mathematical addition or string concatenation. The one you get is based on the implicit type of the operands. If one of the operands is a string, the other will be converted to a string and you'll get concatenation.
The trick is to do the math on the numbers first (you can surround the math portion with parenthesis or do the math in a separate statement) and then inject the result into your string.
To force a string containing a number character into a number, you can use parseInt() and parseFloat():

var result = parseInt(value, 10) + 1;

Note that with parseInt(), you should supply the optional second argument, which specifies the radix for the operation. If the first argument happens to refer to a string that contains a hex value, the result will be based on hex, not base 10. That's why 10 is used in my example.
Also note that both parseInt() and parseFloat() stop after finding the first valid characters that could be treated as numbers. So, in a string like this: "Scott7Marcy9", you would only get the 7.
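To see the difference concretely, here is a small stand-alone snippet; a plain string stands in for the cookie value, since that is what document.cookie hands back anyway:

```javascript
var value = "9"; // cookie values are always strings

var concatenated = value + 1;         // "+" sees a string operand, so: "91"
var added = parseInt(value, 10) + 1;  // parse first, then add: 10
```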
Removing items from a list component
In this recipe, we will learn how to drag elements from a
ListView to remove them. We are going to use the
PanResponder to handle the drag events.
Getting ready
We need to create an empty app. For this recipe, we will name it
ListRemoveItems. Feel free to use any other name, just make sure to use the correct name in step 1.
We also need to create three files:
src/MainApp.js,
src/ContactList/index.js, and
src/ContactList/ContactItem.js.
How to do it...
- Open the
index.ios.jsand
index.android.jsfiles, remove the existing code, and add the following:
import React from 'react'; import { AppRegistry } from 'react-native'; import MainApp from './src/MainApp'; AppRegistry.registerComponent('ListRemoveItems', ...
Get React Native Cookbook now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | https://www.oreilly.com/library/view/react-native-cookbook/9781786462558/ch03s07.html | CC-MAIN-2020-29 | refinedweb | 151 | 60.21 |
When it comes to using Hibernate, most projects only use the very basic features. This is mostly due to naivety or unfamiliarity with the product. Hibernate is a very mature and feature rich product that can be used to solve a lot of basic or advanced problems. I think the point here is to put the complexity at the right level: You can always take a very basic approach to using Hibernate and solve the mismatch between your mapping and your model in your model, but that would put complexity in your model that is basically boilerplate code. It is only there because the persistence layer is not used correctly. If you make the persistence layer (in this case the Hibernate mapping) more complex it will be harder to maintain, but your model code will be more concise and easier to understand.
One of the features of Hibernate that is hardly ever used, but has some very interesting applications is the ability to map java.util.Maps. This blog will show an application of using a Map in your model and mapping that model with Hibernate to a Relational Database Schema.
Say one day you want to manage all your recipes in a on-line cookbook. Before digging in and churning out those lines of code, you take a step back and ask yourself: 'What is in this cookbook?'. Basically a cookbook is a collection of recipes and one of the interesting parts of a recipe is that it has ingredients. Well actually, I would say that in a recipe you use a specified amount of an ingredient. So the relationship between recipe and ingredient is not a normal parent-child or even many-to-many association. This is a relationship with an attribute: the attribute is the amount of ingredient you use in a recipe.
Good, so now we have that out of the way we can start coding! Since this will be a web application we turn to Grails, of course, to be sure we can start using our cookbook tonight. So using Grails-1.0-RC-1. we find a good spot in the directory structure and type
C:\projects\grails> grails create-app cookbook
This will basically create the infrastructure for the Grails application. A bunch of directories and files are created below the "cookbook" directory. Now we want to add a few domain classes. We have recognized the following entities: Recipe, Ingredient and Unit. The Unit entity expresses the unit in which the amount of ingredient to use in a recipe is expressed. An example would be that you use 500 grams of sugar in an apple pie recipe. The unit here is 'grams'. So let's add those entities.
C:\projects\grails\cookbook> grails create-domain-class Recipe
C:\projects\grails\cookbook> grails create-domain-class Ingredient
C:\projects\grails\cookbook> grails create-domain-class Unit
These are very simple commands that will basically create two files: one in the source directory and one in the test directory. Let's edit the Ingredient.groovy and Unit.groovy files to add the simple properties that these entities have.
Ingredient.groovy:
class Ingredient {
    String name
}
Unit.groovy
class Unit {
    String name
}
That's all we need for now. The Recipe entity will be a bit more exciting. Thinking back to the short discussion of what a recipe is, it contains ingredients, but it also wants to know about the amount of each ingredient. So basically it creates a map of ingredients in which the key is an Ingredient and the value is the amount of ingredient you use in the Recipe. The amount will be a value (component, embeddable) object, since it is not an entity but a property of the relation between Recipe and Ingredient. To tell Grails that the Amount class is not an entity, we put the Amount.groovy file into the src/groovy directory.
Recipe.groovy
class Recipe {
    String name
    Map<Ingredient, Amount> ingredients
}
Amount.groovy (in src/groovy)
class Amount {
    int value
    Unit unit
}
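For readers less familiar with Groovy, the model boils down to the following plain-Java sketch (the constructors are added here for convenience and are not part of the Grails classes above). Each Ingredient key in the map is paired with the Amount of it used in the recipe:

```java
import java.util.HashMap;
import java.util.Map;

class Unit {
    String name;
    Unit(String name) { this.name = name; }
}

class Ingredient {
    String name;
    Ingredient(String name) { this.name = name; }
}

class Amount {
    int value;
    Unit unit;
    Amount(int value, Unit unit) { this.value = value; this.unit = unit; }
}

class Recipe {
    String name;
    // key: which ingredient; value: how much of it, and in which unit
    Map<Ingredient, Amount> ingredients = new HashMap<Ingredient, Amount>();
}
```

So "use 500 grams of sugar in an apple pie" becomes applePie.ingredients.put(sugar, new Amount(500, grams)).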
So now comes the full power of Grails. It is time to create the controllers and views for these entities.
C:\projects\grails\cookbook> grails generate-all Recipe
C:\projects\grails\cookbook> grails generate-all Ingredient
C:\projects\grails\cookbook> grails generate-all Unit
After this we start the Grails container and look at the application. The main page pops up, and although you will want to change this eventually, it suffices to complete this example. If you look at the Ingredient and Unit controllers, they simply work out of the box. They'd better, because they are basically the simplest thing possible: a thing with a name. Using these controllers you can add, list, update and remove the ingredients and the units in the system.
Now we turn to the Recipe controller. The Recipe controller is not fit for the tasks that we would like to do with it. If you open the create or edit view, you'll be able to edit the name property, but the ingredients property is not editable. The show view shows the Map, but as a simple property and not as a collection.
That is not what we want. We want to add, list, update and remove ingredients for a recipe. Now we need to change the view and controller for Recipe, but I'm a bit suspicious about the data model at this point. So let's look at the schema that Hibernate generated.
All tables are disconnected! There are no foreign keys in the system. The reason is that Hibernate by default will store the keys and values in the map as single values. Both IDX and ELT columns in the RECIPE_INGREDIENTS table are VARCHAR typed. Before we fix the user interface, let's try to fix this problem.
An approach that might look simple is to forget about the whole Map and use a Set of some entity or component class that will have a reference to the ingredient and an amount. But what would that class represent? It would be some sort of IngredientUse or something, representing the usage of an ingredient in a recipe. To me that doesn't really feel right. In the domain we don't normally have such a thing as an IngredientUse. We just use a certain amount of an Ingredient in a Recipe. So to stick with this model we have to make the persistence layer just a little bit smarter. Actually, adding two standard Hibernate configuration files in the grails-app/conf/hibernate directory will do just that.
hibernate.cfg.xml
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "">
<hibernate-configuration>
    <session-factory>
        <mapping resource="Recipe.hbm.xml"/>
    </session-factory>
</hibernate-configuration>
Recipe.hbm.xml
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "">
<hibernate-mapping>
    <class name="Recipe" table="recipe">
        <id name="id" column="id">
            <generator class="native" />
        </id>
        <property name="name" column="name" />
        <map name="ingredients" table="RECIPE_INGREDIENTS" cascade="all-delete-orphan" lazy="false">
            <key column="recipe_id" not-null="true"/>
            <map-key-many-to-many class="Ingredient" column="ingredient_id"/>
            <composite-element class="Amount">
                <property name="value"/>
                <many-to-one name="unit" class="Unit" column="unit_id"/>
            </composite-element>
        </map>
    </class>
</hibernate-mapping>
In addition we have to add an id property (and optionally a version property for optimistic locking) to the Recipe class.
If we now restart the server and look at the newly created database schema, it will look like the image below, now containing the foreign keys as expected and with some more readable column names.
Now, to enable editing the map via the web application, we have to add some methods to the RecipeController. The methods will change the contents of the ingredients map in a specific recipe.
def addIngredient = {
    doWith( params.id, params.ingredientId ) { Recipe recipe, Ingredient ingredient ->
        def unit = Unit.get( params.unitId )
        def amount = new Amount(value : Integer.parseInt(params.amount), unit : unit)
        recipe.ingredients.put(ingredient, amount)
    }
}

def updateIngredient = {
    doWith( params.id, params.ingredientId ) { Recipe recipe, Ingredient ingredient ->
        recipe.ingredients.get(ingredient).value = Integer.parseInt(params.amount)
    }
}

def deleteIngredient = {
    doWith( params.id, params.ingredientId ) { Recipe recipe, Ingredient ingredient ->
        recipe.ingredients.remove(ingredient)
    }
}

private doWith (recipeId, ingredientId, cl) {
    def recipe = Recipe.get( recipeId )
    if (recipe) {
        def ingredient = Ingredient.get( ingredientId )
        if (ingredient) {
            cl.call(recipe, ingredient)
            redirect(action:show, id:recipe.id)
        } else {
            flash.message = "Ingredient not found with id ${ingredientId}"
            redirect(action:edit, id:recipeId)
        }
    } else {
        flash.message = "Recipe not found with id ${recipeId}"
        redirect(action:edit, id:recipeId)
    }
}
The controller methods are pretty straightforward: they fetch the necessary entities, do some validation and finally manipulate the ingredients map. The changes are persisted transparently by Hibernate at commit time.
As far as the model and the Hibernate mapping are concerned we could stop here, but we'll finish the view part of the application just for fun!! In the edit.gsp, create.gsp, list.gsp and show.gsp for recipe we will remove the elements that show the ingredients property, since the representation is not very useful. There are several places that we could use to provide a user interface to manipulate the map. I've chosen to add the components to the show view of the recipe, turning that essentially into a show view of the recipe and an edit view for the ingredients collection. This could be moved to the edit view, but that would result in a view with a lot of buttons that manipulate different entities. Anyway, either user interface design would be possible with this model and controller design.
fragment of show.gsp
<tr class="prop">
    <td valign="top" class="name">Ingredients:</td>
    <td valign="top" class="value">
        <table>
            <g:each in="${recipe.ingredients}" var="entry">
                <g:form>
                    <tr>
                        <input type="hidden" name="id" value="${recipe?.id}" />
                        <td><input type="text" id="amount" name="amount" value="${fieldValue(bean:entry.value,field:'value')}"/></td>
                        <td>${entry.value.unit.name}</td>
                        <td>of</td>
                        <td>${entry.key.name}</td>
                        <input type="hidden" name="ingredientId" value="${entry.key.id}" />
                        <td><span class="button"><g:actionSubmit value="Update" action="updateIngredient"/></span></td>
                        <td><span class="button"><g:actionSubmit value="Delete" action="deleteIngredient"/></span></td>
                    </tr>
                </g:form>
            </g:each>
            <g:form>
                <tr>
                    <input type="hidden" name="id" value="${recipe?.id}" />
                    <td><input type="text" id="amount" name="amount"/></td>
                    <td><g:select name="unitId" from="${Unit.list()}" optionKey="id" optionValue="name"/></td>
                    <td>of</td>
                    <td><g:select name="ingredientId" from="${Ingredient.list()}" optionKey="id" optionValue="name"/></td>
                    <td> </td>
                    <td><span class="button"><g:actionSubmit value="Add" action="addIngredient"/></span></td>
                </tr>
            </g:form>
        </table>
    </td>
</tr>
The <g:actionSubmit> tags will turn into buttons that trigger the updateIngredient, deleteIngredient and addIngredient methods on the RecipeController. This enables the user to manipulate the ingredient map for the recipe.
Now this application can be used to easily manage a collection of recipes with ingredients. You can find all sources of this application in the attached Zip file.
When you find a mismatch between a simple Hibernate mapping and an expressive model, try to stay true to the model. Using a slightly more complex mapping will keep the model simple, expressive and easy to use. In some cases a Map can be used to express a complex association between entities that has properties of its own.
In the process we have seen that it is easy and fun to develop web applications in Grails. Hibernate mappings in XML files can be added to Grails to enhance the mapping features that Grails provides out of the box.
Hi Maarten,
Great stuff. I agree that we should focus on the model. Using these hints will allow us to do so.
I am only wondering whether I would choose to create a Unit table; it feels a bit like over-normalizing to me. Instead you could also choose to create an enum for Units and just store it as text in the RECIPE_INGREDIENTS table. The hbm file will then be something like this:
instead of:
I think this will do it:
See for an example of how to implement the UnitUserType.
BTW I could not download the attached zip file, so I could not test my proposed solution (I didn't feel like copy-pasting ;-))
Lars
Hi Lars,
Thanks for your comment! I fixed the download link for the Zip file, it should work now.
Regarding your comment on Enum vs. Table, I would say that it depends on your requirements. You could certainly do as you suggest and use a CustomUserType to implement persisting an enum. I could foresee in this case the need to add Units at run time (whoever thought you needed both tablespoon and teaspoon as units for recipes ;)). It would be quite a pain if you had to change a .java file and recompile and rebuild the whole app… o wait, this is Grails… nevermind
Cheers!
-Maarten
great!
I’m have been googling at how to store a map from an entity to a value, like you are doing here, but with hibernate-annotations.
if you could add that it would be great! | http://blog.xebia.com/2007/10/22/advanced-hibernate-maps-part-1-complex-associations/ | crawl-002 | refinedweb | 2,163 | 56.45 |
Working with Server Side Includes
In a similar way to ESI (Edge Side Includes), SSI can be used to control HTTP caching on fragments of a response. The most important difference is that SSI is supported directly by most web servers, like Apache, Nginx, etc.
The SSI instructions are done via HTML comments:
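A typical directive, in Apache's SSI syntax (the path is illustrative):

```html
<!--#include virtual="/gdpr-block.html" -->
```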
There are some other available directives, but Symfony handles only the #include virtual one.
Caution
Be careful with SSI: your website may become a victim of injection attacks. Please read this OWASP article first!
When the web server reads an SSI directive, it requests the given URI or serves it directly from its cache. It repeats this process until there are no more SSI directives to handle. Then, it merges all responses into one and sends it to the client.
Using SSI in Symfony
First, to use SSI, be sure to enable it in your application configuration:
- YAML
- XML
- PHP
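In the YAML variant, enabling SSI support is a one-liner under the framework key (the file path is assumed from the standard Symfony layout):

```yaml
# config/packages/framework.yaml
framework:
    ssi: { enabled: true }
```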
Suppose you have a page with private content like a Profile page and you want to cache a static GDPR content block. With SSI, you can add some expiration on this block and keep the page private:
// src/Controller/ProfileController.php
namespace App\Controller;

// ...

class ProfileController extends AbstractController
{
    public function index(): Response
    {
        // by default, responses are private
        return $this->render('profile/index.html.twig');
    }

    public function gdpr(): Response
    {
        $response = $this->render('profile/gdpr.html.twig');

        // sets to public and adds some expiration
        $response->setSharedMaxAge(600);

        return $response;
    }
}
The profile index page is not publicly cached, but the GDPR block has a 10-minute expiration. Let's include this block into the main one:
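In Twig, such an include is typically written with the render_ssi helper (the controller reference is assumed from the PHP example above):

```twig
{# templates/profile/index.html.twig: embed the GDPR block as an SSI fragment #}
{{ render_ssi(controller('App\\Controller\\ProfileController::gdpr')) }}
```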
The render_ssi Twig helper will generate something like an SSI include directive. render_ssi ensures that the SSI directive is generated only if the request carries the required header, such as Surrogate-Capability: device="SSI/1.0" (normally set by the web server). Otherwise it will directly embed the sub-response.
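The directive emitted by render_ssi typically points at Symfony's internal fragment route; the path below is an assumption based on the default fragment configuration, not taken from this page:

```html
<!--#include virtual="/_fragment?_path=..." -->
```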
Note
For more information about Symfony cache fragments, see the ESI documentation.
This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license. | https://symfony.com/doc/current/http_cache/ssi.html | CC-MAIN-2021-21 | refinedweb | 344 | 54.42 |
Results 1 to 1 of 1
Thread: Blank screen
Blank screen
- Member Since
- Dec 22, 2007
- 1
hello friends
i am not sure if this is right place to post so sorry in advance if it is not.
i am very interested in learning C++ so i started it with Xcode.
i do not have any previous knowledge of C++. i wrote very first basic program which is as follows
#include <iostream>
using namespace std;

int main()
{
    cout << "Welcome to C++";
    return 0;
}
with this code i am getting an error message about int main() saying it is duplicated, so i changed it; my new code is as follows
#include <iostream>
using namespace std;

int mainfunction()
{
    cout << "Welcome to C++";
    return 0;
}
with this code, compile is successful, build is successful, no error message.
Now the real problem is that when i run it, it produces only a blank screen, nothing whatsoever on it. Please can anyone advise me what's wrong with it, why it does not show "Welcome to C++"
Any advise is appreciated
Thanks in advance
Thread Information
Users Browsing this Thread
There are currently 1 users browsing this thread. (0 members and 1 guests)
Similar Threads
blank screen. By beglover in forum OS X - Operating System. Replies: 4. Last Post: 09-07-2012, 03:05 PM
Blank out screen. By Cairncastle in forum OS X - Operating System. Replies: 2. Last Post: 01-22-2012, 07:28 AM
Blank Screen. By eddie in forum Apple Notebooks. Replies: 1. Last Post: 04-06-2010, 06:23 AM
blank screen. By eddie in forum Apple Notebooks. Replies: 0. Last Post: 04-03-2010, 09:39 PM
blank pb G4 screen. By pescadoramd in forum Apple Notebooks. Replies: 0. Last Post: 02-09-2004, 05:01 PM | http://www.mac-forums.com/forums/os-x-apps-games/89471-blank-screen.html | CC-MAIN-2017-09 | refinedweb | 281 | 50.13 |
LEARNING OBJECTIVES
- Explain the general concept of the opportunity cost of capital
- Distinguish between the project cost of capital and the firm's cost of capital
- Learn about the methods of calculating component cost of capital and the weighted average cost of capital
- Illustrate the cost of capital calculation for a real company

What is COST OF CAPITAL?

Cost of Capital
- Viewed from all investors' point of view, the firm's cost of capital is the rate of return required by them for supplying capital for financing the firm's investment projects by purchasing various securities.
- It is compensation for TIME and RISK.
- The rate of return required by all investors will be an overall rate of return, a weighted rate of return.

- The project's cost of capital is the minimum required rate of return on funds committed to the project, which depends on the riskiness of its cash flows.
- The firm's cost of capital will be the overall, or average, required rate of return on the aggregate of investment projects.

SIGNIFICANCE OF THE COST OF CAPITAL
- Evaluating investment decisions
- Designing a firm's debt policy
- Appraising the financial performance of top management

CONCEPT OF THE OPPORTUNITY COST OF CAPITAL
- The opportunity cost is the rate of return foregone on the next best alternative investment opportunity of comparable risk.
- Risk-return relationships of various securities

OPPORTUNITY COST OF CAPITAL
- Rate of return required by shareholders on securities of comparable RISK
- The company must pay the same to attract capital

Shareholders' Opportunities and Values
- The required rate of return (or the opportunity cost of capital) of shareholders is market-determined.
- In an all-equity financed firm, the equity capital of ordinary shareholders is the only source to finance investment projects. In such a firm, the firm's cost of capital is equal to the opportunity cost of equity capital, which will depend only on the business risk of the firm.

Creditors' Claims and Opportunities
- Creditors have a priority claim over the firm's assets and cash flows.
- The firm is under a legal obligation to pay interest and repay principal.
- There is a probability that it may default on its obligation to pay interest and principal.
- Corporate bonds are riskier than government bonds since it is very unlikely that the government will default in its obligation to pay interest and principal.

What is average cost of capital? What is marginal cost of capital?

Weighted Average Cost of Capital vs. Specific Costs of Capital
- The cost of capital of each source of capital is known as component, or specific, cost of capital. The overall cost is also called the weighted average cost of capital (WACC).
- The relevant cost in investment decisions is the future cost or the marginal cost. Marginal cost is the new or the incremental cost that the firm incurs if it were to raise capital now, or in the near future.
- The historical cost that was incurred in the past in raising capital is not relevant in financial decision-making.

DETERMINING COMPONENT COSTS OF CAPITAL
- Investors' required rate of return should be adjusted for taxes in practice for calculating the cost of a specific source of capital to the firm.
- Tax adjustment: k_d(1 - T)

Cost of the Existing Debt
- Sometimes a firm may like to compute the "current" cost of its existing debt. In such a case, the cost of debt should be approximated by the current market yield of the debt.

EPS
- A firm is currently earning Rs 100,000 and its share is selling at a market price of Rs 80. The firm has 10,000 shares outstanding and has no debt. The earnings of the firm are expected to remain stable, and it has a payout ratio of 100 per cent. What is the cost of equity? We can use the expected earnings-price ratio to compute the cost of equity. Thus: EPS = 100,000 / 10,000 = Rs 10, and k_e = EPS / Price = 10 / 80 = 12.5 per cent.

CAPITAL ASSET PRICING MODEL (CAPM)
- As per the CAPM, the required rate of return on equity is given by the following relationship: k_e = R_f + (R_m - R_f) * beta_j
- The equation requires the following three parameters to estimate a firm's cost of equity: the risk-free rate (R_f), the market risk premium (R_m - R_f), and the beta of the firm's share (beta_j).

Example
- Suppose in the year 2002 the risk-free rate is 6 per cent, the market risk premium is 9 per cent and the beta of L&T's share is 1.54. The cost of equity for L&T is: k_e = 6 + 9 * 1.54 = 19.86 per cent.

Cost of equity under CAPM

COST OF EQUITY
- CAPM has a wider application although it is based on restrictive assumptions. The only condition for its use is that the company's share is quoted on the stock exchange. All variables in the CAPM are market-determined and, except for the company-specific share price data, they are common to all companies.
- The value of beta is determined in an objective manner by using sound statistical methods. One practical problem with the use of beta, however, is that it probably does not remain stable over time.

WEIGHTED AVERAGE COST OF CAPITAL
- The following steps are involved for calculating the firm's WACC: calculate the cost of specific sources of funds; multiply the cost of each source by its proportion in the capital structure; add the weighted component costs to get the WACC.
- k_o = k_d(1 - T) * D / (D + E) + k_e * E / (D + E)
- WACC is in fact the weighted marginal cost of capital (WMCC); that is, the weighted average cost of new capital given the firm's target capital structure.

Cost of Capital for Projects: Risk Adjustment
- A simple practical approach to incorporate risk differences in projects is to adjust the firm's WACC (upwards or downwards), and use the adjusted WACC to evaluate the investment project.
- Companies in practice may develop policy guidelines for incorporating the project risk differences. One approach is to divide projects into broad risk classes, and use different discount rates based on the decision maker's experience.

Cost of Capital for Projects
- For example, projects may be classified as:
  - Low risk projects: discount rate < the firm's WACC
  - Medium risk projects: discount rate = the firm's WACC
  - High risk projects: discount rate > the firm's WACC | https://www.scribd.com/presentation/51197502/9-COST-OF-CAPITAL | CC-MAIN-2017-39 | refinedweb | 1,089 | 56.45 |
We recently published a series discussing the views of our Founders on hereditary fortunes: they were vehemently against them. Yes, they didn't much like taxes either, but for many of our Founders, estate taxes were the lesser evil when compared to hereditary wealth. Eliminating the ability of rich, powerful families to retain that wealth over many generations was more important than minimizing taxation.
Our founders weren't the only ones who saw problems with long-term concentration of wealth.
What is most important for democracy is not that great fortunes should not exist, but that great fortunes should not remain in the same hands. In that way there are rich men, but they do not form a class.
Alexis de Tocqueville
The reaction of our readership, who tend towards the conservative, was not entirely approving. Why would we propose increasing taxes? Aren't we all about decreasing the power and reach of government?
And most particularly - by specifically targeting one segment of the populace, namely rich heirs, wouldn't we be engaging in social engineering that is wrong in principle?
Let's start with first principles. As much as conservatives hate taxes and government power, we nearly all agree that there has to be some minimum degree of government power in order to keep civilization around. To fund that, there has to be some level of taxation. We may argue over the appropriate levels, but no serious person argues for no government or no taxation at all. The only thing worse than a bad government is an anarchy - otherwise Somalia would be booming.
So given that there have to be taxes, it follows that something in particular has to be taxed. Do we want to put a tax on property? On income? On sales, perhaps? Maybe imports, or exports? Tea? Sugary drinks?
Our Founders intended the Federal government to subsist mostly off import duties. They picked this on purpose: it was their express intent that, by taxing imports, domestic manufacturing and production would be encouraged. To cite but two:
Indeed we have already been too long subject to British prejudices. I use no porter or cheese in my family, but such as is made in America.
- George Washington
If it cost England $100 to manufacture an item, but there was a 10% import tax, then an American firm could still compete even if it cost them $109 to manufacture the same item, because the import, once the tariffs were added in, would cost $110. And that's without accounting for patriotism which, according to Jefferson, called for citizens to buy locally even if it cost them more.
In economic terms this seems foolish, because it raises the cost of goods to the customers, but our Founders were thinking long term. They knew that American industry, being in its infancy, would have a hard time competing against the well-established and well-funded British enterprises. They needed a helping hand much as the Chinese government felt that their industries needed a helping hand.
In other words, the Founders favored customs duties to explicitly engage in social engineering, picking economic winners and losers: they wanted American firms to have an easier time of things, and British firms a harder one.
Would this have an even effect on everything? Of course not! Some goods were already cheaper to produce in America. Others would be impractical for a long time to come due to economies of scale. The Founders didn't care; they wanted to put a heavy thumb on their side of the scale.
The reason conservatives instinctively recoil from using taxation in this way is because our government has long established a powerful reputation of being bad at it. As Donald Trump repeatedly pointed out during his campaign, we don't seem able to negotiate good trade deals that benefit America, American consumers, American workers, and American businesses. So if you're incompetent at something, it's not wholly illogical to try not to do it.
But that doesn't escape the fact that all taxation is social engineering. If you tax tea, people will drink more coffee. If you tax waged income, people will earn less in wages and will try to move income to other forms such as capital gains. If you provide tax deductions for charitable donations, people will - surprise! - donate more to charity. We do this sort of social engineering all the time, thinking nothing of it.
Now, as with tariffs, these manipulations often don't work as planned. Many cities tax cigarettes enormously in an attempt to discourage their use; all they actually accomplish is taking even more money from the poor who smoke. This also creates a black market of smuggling untaxed cigarettes that provides great wealth to criminals while requiring cops to enforce yet one more law.
As with any weapon, taxes can be used to a wide range of destructive effect. President Obama understood this well:
If somebody wants to build a coal-fired power plant, they can. It’s just that it will bankrupt them... Under my plan … electricity rates would necessarily skyrocket.
Mr. Obama knew he wouldn't get away with actually outlawing coal; he simply wanted to make the regulations and taxes so costly that nobody would use it anymore.
Whole books have been written about this approach to governance: don't ban things, simply discourage people from doing them using taxation. The Left uses it constantly: we see congestion charging introduced to discourage driving and encourage uneconomic mass transit, for instance, and in California the government charges massive fees to discourage people from building new housing or from using plastic straws.
Given that we have to have taxes, and taxes by their nature discourage whatever is being taxed, why can't conservatives wield the same weapon? We've already seen how vast fortunes left unspent when their original earner dies almost always end up funding the Left over time. So taxing vast fortunes, in addition to being recommended by the Founders, is a good way to remove money from the opposition.
President Trump and the Republicans in Congress, in one of their precious few joint accomplishments, managed to cap the deduction of state income tax against federal taxes. This means the people in high-tax states - mostly Democrats - will be paying more than they were before. We also know that wealthy people tend to be Democrats, so why wouldn't we want to take money away from them that otherwise will likely fund our enemies? Taxes have to be paid by somebody, why not them?
Elite private colleges - universally leftist - sit on billion-dollar endowments which make them largely independent of economic pressure, and which, being "educational" institutions, are untaxed. In effect, the Ivies have become a generations-descended heir who've long abandoned the principles of those who earned the money. Why not tax these stockpiles that drive Leftist indoctrination, a change that even some on the Left support?
A few moment's contemplation reveals a myriad of other opportunities to use the legitimate power of taxation in a strategic way to make America better off. Political campaigns are already taxed - why not political pressure groups, currently mostly nonprofits? How about a new Stamp Act of heavy charges on all legal filings - surely we want to discourage those, while equally surely not eliminating courts?
All this talk of raising taxes will raise the hackles of any true conservative - after all, we don't want government to have even more resources than it already does. If we choose to raise a tax for the purpose of harming something we want harmed, we should also lower taxes on things we want to promote.
We already don't tax churches, and that's good.
For decades, we haven't allowed states to tax stores that sell by mail from some other state, but the Supreme Court recently changed the rules to allow it - we need to fix that, to remove the onerous burden of compliance to an infinitude of taxing entities from small businesses.
Work is something we want to encourage - why, then, do we tax it? We also want to encourage families, and we do have tax deductions for children, but the deduction is a tiny fraction of what it actually costs to raise a child. If a parent is raising the next generation of taxpayers, isn't it fair for them to have full relief from taxes?
And since we all pay property taxes which pay for public schools, even those of us who don't want to send our children to them, why shouldn't people be allowed to deduct the costs of private school or homeschool?
If your employer pays for your health insurance, you don't pay income tax on that money. If you pay for your own health insurance, though, you do, and that's insane.
Of course, each and every one of these changes would have unintended consequences. But then, so does every law - it's not possible to think of everything. And unless we are satisfied with things exactly as they are, we have to change something.
Why not make social-engineering taxation changes that would hurt the opposition and help our side? That's what they've been doing for a hundred years, and it's seemed to work pretty well for them.
Over the past five years, the editors have been secretly working on a book that summarizes the fundamental viewpoints of Scragged.
Well heck. YOU be the clay, and I will be the sculptor.
Tough sell to screw your kids after you have worked hard to give them some inheritance. This seems to assume that anybody who passes funds on to their kids is talking millions of $ . For many it’s less . Not sure they should be penalized along with the Bloomberg and Sanders heirs.
Even though Herman Cain is the piñata du jour, it may make sense to relook at his 9-9-9 plan. Everybody gets screwed...consumers, working stiffs, and corporations. Helps assure that less is left for the next generation. Which I guess is your goal.
Hardly. The Founders were opposed to hereditary *fortunes*. Think Bill Gates, Mark Zuckerberg, Jeff Bezos sort of people. Obviously we don't think *all* inheritances should be taxed away. Say, over $100 million - and even then, the point wasn't to give the government money, but to incentivize the super-wealthy to use it wisely and charitably before they go.
Interesting discussion with a co-worker yesterday, while thinking about this series of articles. A few years ago the grid operator tripled the cap on the wholesale price of electricity. Many thought it was a get-rich-quick scheme to benefit the electric companies. A little thought at the time led me to think that the market price would never get to the cap because enough generation would be encouraged to be built and that's exactly what happened. In fact, a couple of years after the move the market price failed to reach the original cap because of the generation that was built in response to removing the price control (tripling the cap).
Just like the author has pointed out, the grid operator did not intend to pay that higher wholesale price any more than the founders expected to collect on an estate tax. I know that even in my meager tax situation, I go out of my way to avoid flushing money down the federal toilet, and anyone capable of building a fortune would do likewise, and still manage to provide a reasonable inheritance for his children while not depriving them of the joy of work and accomplishment.
When you consider that tax avoidance is a primary objective of anyone in big business, the thesis makes complete sense. However, most others are tax victims, who just pay and complain- while not taking the time or discipline to arrange their matters to minimize impact.
One should not forget that generational wealth usually ends up funding leftist causes...I think we all know of some examples of "trust fund babies" in our lives who were not made better by having every need effortlessly met.
Thank you for a very enlightening series.
bsinn, I don't get the impression they were against inheritance per se, just megadynasties resulting in a permanent wealthy class likely to conspire to exclude others from obtaining wealth. One of the assumptions of a free market is that the market is effectively unlimited. That is not strictly true, and it is apparent that some of the old moneyed families are literally born into power. If Congress were to place a cap on inheritance indexed to the median working wage, statistically most family fortunes would decline over a few generations. So, say the median salary is $50k (it is a bit higher, but this is for easy math) and the index said you could bequeath 100x the median salary. That would be $5 million - enough to immediately retire on, but not enough to be an automatic Prince. I've thought something like this would be a positive thing for decades, but did not realize until reading this article that the Founders had so clearly addressed inheritance. | http://www.scragged.com/articles/who-s-afraid-of-social-engineering | CC-MAIN-2020-34 | refinedweb | 2,205 | 60.95 |
Use ScrollBar when "import QtQuick.Controls 2.0 as 'SomeTag'"
- carles.sole.grau
Hi everybody,
I have a problem when I'm trying to add a ScrollBar in a Flickable.
My problem is that I import QtQuick.Controls 2.0 with a qualifier, i.e. "import QtQuick.Controls 2.0 as 'SomeTag'", because I want to distinguish it from QtQuick.Controls 1.4.
The example in the doc is like this:
Flickable {
    // ...
    ScrollBar.vertical: ScrollBar { }
}
But to fit in my Code I have to use:
import QtQuick 2.7
import QtQuick.Controls 2.0 as Phone
import QtQuick.Controls 1.4 as Desktop

Phone.Page {
    title: "MyPage"

    Flickable {
        id: flickable
        anchors.fill: parent
        contentHeight: flickableChildItem.height
        boundsBehavior: Flickable.StopAtBounds

        Phone.ScrollBar.vertical: Phone.ScrollBar {}
    } //Flickable
} //Page
When I try to run my code, this problem appears:
qrc:/QML/MyPage.qml:7:5: Cannot assign object to read only list
I would like to keep this:
import QtQuick 2.7
import QtQuick.Controls 2.0 as Phone
import QtQuick.Controls 1.4 as Desktop
Because sometimes I like to use some QtQuick.Controls 1.4 Item.
Thank you very much | https://forum.qt.io/topic/74617/use-scrollbar-when-import-qtquick-controls-2-0-as-sometag | CC-MAIN-2017-51 | refinedweb | 184 | 63.96 |
Created on 2017-05-24 20:41 by Aaron Hall, last changed 2019-11-16 01:14 by cjw296. This issue is now closed.
We have __slots__ with other ABC's, see and.
There are no downsides to having empty slots on a non-instantiable class, but it does give the option of denying __dict__ creation for subclassers.
The possibility of breaking is for someone using __slots__ but relying on __dict__ creation in a subclass - they will have to explicitly add "__dict__" to __slots__. Since we have added __slots__ to other ABC's,
I will provide a PR soon on this. Diff should look like this (in Lib/abc.py):
  class ABC(metaclass=ABCMeta):
      """Helper class that provides a standard way to create an ABC using
      inheritance.
      """
-     pass
+     __slots__ = ()
(I also want to add a test for this, and ensure other ABC's also have this if they don't.)
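As a quick illustration of the semantics under discussion (not from the patch itself): an empty __slots__ on a base class suppresses the per-instance __dict__ only if every class in the inheritance chain also defines __slots__:

```python
class Base:
    __slots__ = ()  # no per-instance __dict__ at this level


class Derived(Base):
    # defines no __slots__, so its instances get a __dict__ again
    def __init__(self):
        self.x = 1


b_has_dict = hasattr(Base(), "__dict__")     # False: fully slotted chain
d_has_dict = hasattr(Derived(), "__dict__")  # True: Derived reintroduces it
```

This is exactly why adding __slots__ = () to abc.ABC does not break subclasses that rely on instance dictionaries.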
This seems reasonable.
New changeset ff48739ed0a3f366c4d56d3c86a37cbdeec600de by Serhiy Storchaka (Aaron Hall, MBA) in branch 'master':
bpo-30463: Add an empty __slots__ to abc.ABC.
I will note that this means with:
class BaseClass(ABC):
    pass

class MyDerivedClass(BaseClass):
    def __init__(self, thing):
        self.thing = thing

thing = MyDerivedClass(5)  # __init__ requires an argument
thing now has both __slots__ and, evidently, a dict. That's a bit weird and confusing. | https://bugs.python.org/issue30463 | CC-MAIN-2020-34 | refinedweb | 213 | 66.13 |
builds server manager undo sets and pushes them on the undo stack. More...
#include <vtkSMUndoStackBuilder.h>
builds server manager undo sets and pushes them on the undo stack.
vtkSMUndoStackBuilder records Server Manager changes that are undo/redo able and collects them. To begin recording such changes one must call Begin(). To end recording use End(). One can have multiple blocks of Begin-End before pushing the changes on the Undo Stack. To push all collected changes onto the Undo Stack as a single undoable step, use PushToStack(). Applications can subclass vtkSMUndoStackBuilder to record GUI related changes and add them to the undo stack.
Definition at line 41 of file vtkSMUndoStackBuilder.h.
Begins monitoring of the vtkSMProxyManager for undoable operations. All noted actions are converted to UndoElements and collected. One vtkUndoElement is created per action. All undo elements become a part of a vtkUndoSet which is pushed on to the Undo Stack on PushToStack().
label is a suggestion for the UndoSet that will be built. If the UndoSet already has elements implying it hasn't been pushed to the stack then the label is ignored.
Stops monitoring of the vtkSMProxyManager for undoable operations. Any changes made to the proxy manager will not be converted to UndoElements. This method does not push the vtkUndoSet of undo elements built. One must call PushToStack() to push the UndoSet to the Undo stack. Alternatively, one can use the EndAndPushToStack() method which combines End() and PushToStack().
Convenience method call End(); PushToStack(); in that order.
Definition at line 67 of file vtkSMUndoStackBuilder.h.
If any undoable changes were recorded by the builder, this will push the vtkUndoSet formed on to the UndoStack. The UndoStack which the builder is building must be set by using SetUndoStack(). If the UndoSet is empty, it is not pushed on the stack. After pushing, the UndoSet is cleared so the builder is ready to collect new modifications.
Discard all recorded changes that haven't been pushed on the UndoStack.
One can add arbitrary elements to the active undo set. It is essential that the StateLoader on the UndoStack can handle the arbitrary undo elements. If that element has been escaped for any reason, the method will return false.
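The Begin/Add/End/PushToStack protocol described above can be sketched with a small self-contained analogue; this is an illustrative toy mirroring the documented semantics (label ignored when the set is non-empty, empty sets not pushed), not the ParaView implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Toy analogue of the documented builder protocol.
struct UndoSet {
    std::string label;
    std::vector<std::string> elements;
};

class ToyUndoStackBuilder {
public:
    void Begin(const std::string& label) {
        // The label is only a suggestion: ignored if the set already has elements.
        if (current_.elements.empty()) current_.label = label;
        recording_ = true;
    }
    void End() { recording_ = false; }
    bool Add(const std::string& element) {
        // Elements are collected only between Begin() and End().
        if (!recording_) return false;
        current_.elements.push_back(element);
        return true;
    }
    void PushToStack() {
        // An empty set is not pushed; after pushing, the builder is cleared.
        if (!current_.elements.empty()) {
            stack_.push_back(current_);
            current_ = UndoSet{};
        }
    }
    void Clear() { current_ = UndoSet{}; }
    std::size_t StackSize() const { return stack_.size(); }

private:
    bool recording_ = false;
    UndoSet current_;
    std::vector<UndoSet> stack_;
};
```

The design point is that many fine-grained changes become one undoable step: everything collected between Begin() and PushToStack() lands on the stack as a single set.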
Get/Set the undo stack that this builder will build.
Get/Set the undo stack that this builder will build.
Reimplemented in pqUndoStackBuilder.
Reimplemented in pqUndoStackBuilder.
Returns if the event raised by the proxy manager should be converted to undo elements.
Definition at line 134 of file vtkSMUndoStackBuilder.h.
Definition at line 126 of file vtkSMUndoStackBuilder.h.
Definition at line 127 of file vtkSMUndoStackBuilder.h.
Definition at line 128 of file vtkSMUndoStackBuilder.h.
Definition at line 144 of file vtkSMUndoStackBuilder.h.
Definition at line 145 of file vtkSMUndoStackBuilder.h. | http://www.paraview.org/ParaView3/Doc/Nightly/html/classvtkSMUndoStackBuilder.html | crawl-003 | refinedweb | 455 | 68.77 |
I am currently posting audio through javascript fetch to a Django server, post-processing it and returning a JsonResponse.
I now would like to display the JsonResponse on my website as they come in.
I understand I need some type of listener in my JS that gets triggered with every POST request that is being made, probably after my fetch function or maybe a separate function?
My Js
function makeLink(){
let blob = new Blob(chunks, {type: media.type })
let fd = new FormData();
fd.append("audioRecording", blob);
fetch("", {method:"POST", body:fd})
.then(response => response.ok)
.then(res => console.log(res))
.catch(err => console.error(err));
}
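One hedged way to surface each JsonResponse on the page is to parse the body with response.json() and append a formatted line to the textarea; the helper below is pure and illustrative (the "text" element id comes from the HTML further down):

```javascript
// Turn a parsed JsonResponse payload, e.g. {abc: 123}, into a display line.
function formatResponse(data) {
  return Object.entries(data)
    .map(([key, value]) => `${key}: ${value}`)
    .join(", ");
}

// Browser-side wiring (sketch; runs after each POST completes):
// fetch("", { method: "POST", body: fd })
//   .then(response => response.json())
//   .then(data => {
//     document.getElementById("text").value += formatResponse(data) + "\n";
//   })
//   .catch(err => console.error(err));
```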
from django.http import JsonResponse

def audio(request):
    if request.method == 'POST':
        # Store audio
        # Make some API calls
        return JsonResponse({"abc": 123})
<!DOCTYPE html>
<html>
<body>
<textarea id="text">
Response should be here
</textarea>
</body>
</html>
Here is my post, which I also commented on your question:
If you want to add your response to your custom textbox, you need to do this with JavaScript. Actually, you currently only print your response to the web console:
console.log(res)
You could change this part to add your response to your textbox. With jQuery it could look like this:
$('#text').append(res); // If 'res' is your response you want to add
If you want to add something else, just replace 'res'.
Also, I would recommend working with the django-rest-framework. I also work with this framework. Take a look -> django-rest-framework | https://codedump.io/share/wpt2fTQlwkO0/1/django---display-jsonresponse-after-fetch-post | CC-MAIN-2018-47 | refinedweb | 241 | 66.44 |
mongosniff crashes on start
Bug Description
== Begin SRU Template ==
[Impact]
* mongosniff crashes when it receives messages with no namespace. The result is that a user cannot even press only the enter key without the mongosniff application crashing.
* mongosniff in Xenial and Yakkety is currently essentially broken because of a check in the code that gets the message namespace: if there is no namespace, it crashes.
[Test Case]
* lxc launch ubuntu-daily:xenial xenial
* lxc exec xenial bash
* apt install mongodb
* mongosniff --source NET lo
* # Open a 2nd terminal and run the following
* lxc exec xenial bash
* mongo
* # press enter a few times or type 'help'
* # Observe mongosniff abort, with a core dump, in the first window
[Regression Potential]
* Users currently experiencing this issue would be expecting a SRU fix to come from us as the application is broken in a major way.
* The only work around it would would require rebuilding mongodb from source with the fix to resolve the issue.
* The change was limited to the mongosniff source code only.
[Other Info]
Ubuntu:
* Xenial x64
Packages:
#user@localhost:~$ for i in `dpkg --get-selection
* mongodb 2.6.10-0ubuntu1
* mongodb-clients 2.6.10-0ubuntu1
* mongodb-server 2.6.10-0ubuntu1
----
Steps to reproduce:
1. start service
2. start sniffer
3. start shell
4. crash sniffer
---
user@localhost:~$ sudo service mongodb start
user@localhost:~$ mongo
MongoDB shell version: 2.6.10
connecting to: test
(...)
user@localhost:~$ sudo mongosniff --source NET lo
sniffing... 27017
127.0.0.1:37522 -->> 127.0.0.1:27017 admin.$cmd 60 bytes id:0 0
query: { whatsmyuri: 1 } ntoreturn: 1 ntoskip: 0
2016-05-
2016-05-
mongosniff(
mongosniff(
mongosniff(
mongosniff() [0x86aca8]
mongosniff(
/usr/lib/
/usr/lib/
/usr/lib/
mongosniff(
mongosniff(
/lib/x86_
mongosniff(
terminate called after throwing an instance of 'mongo:
what(): assertion src/mongo/
Aborted (core dumped)
user@localhost:~$
Hi! Thanks for the report. I tried to reproduce this today and was able to with your comments above. I did have to execute something on the mongo shell and then got the crash.
I will nominate this for Trusty, Xenial, and Yakkety as it is fixed in Zesty.
I believe next steps here are an upload to proposed and then sru verification. Can someone confirm?
I've just got round to this again - sorry for the delay.
AFAICT, the quilt patch doesn't apply cleanly in either Xenial or Yakkety. I don't know if I have a tooling problem here, though. I don't remember whether I verified this when I reviewed. Please could you take a look?
Both merge requests have been updated with patches fixed to apply on top of the other patches.
Marking yakkety invalid as it is now EOL. Working to get xenial merge unstuck.
Sponsored via git workflow.
Hello ahsdkjhkbvnmxcv, or anyone else affected,
Accepted mongod:/
Looks like the test starting mongod during the build on armhf failed with:
[Errno 111] Connection refused
[Errno 111] Connection refused
timeout starting mongod
https:/
This succeeds on every other arch and I did not see any issues with it on the latest rebuild document:
http://
Can a rebuild be attempted?
I've retried the build just now.
it appears to be this bug: https://jira.mongodb.org/browse/SERVER-14843
Greetings,
I need help with a Simple C program.
I have to take input data such as this :
Write a C program that will process a series of "words"
entered by the user. A word (for the purpose of this
assignment) is defined as a sequence of non-white-space
characters (anything other than a space, a tab, or a
newline). The following activities should be performed
in the program, and on the data entered by the user.
Does all of this sound right? You betcha!
and be able to count how many words are in it (66), the longest word, how many capitalized words, how many sentences, the longest sentence, and the average length of words. We haven't gotten into any advanced coding yet, so anything more complex than basic while-loops probably won't be of any help :(
Any help would be appreciated. This is my code so far... it's not saying much though /cry
#include <iostream>
#include <string>
using namespace std;

int main() {
    string word;
    int wordCount = 0;
    string line;
    cout << "Enter Your Lines:" << endl << endl;
    while (getline(cin, line)) {
        // TODO: count the words in each line
    }
    return 0;
}
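For the counting itself, something along these lines might work with nothing fancier than a basic while loop: reading with `cin >> word` already skips spaces, tabs, and newlines, so every successful read is one "word" by the assignment's definition. This is only a sketch — the names `WordStats` and `analyze` are my own, not from the assignment, and treating any word ending in '.', '!' or '?' as the end of a sentence is a simplification:

```cpp
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

// Illustrative names, not from the assignment.
struct WordStats {
    int words = 0;
    int capitalized = 0;
    int sentences = 0;
    size_t longest = 0;      // length of the longest word, in characters
    double avgLength = 0.0;  // average word length
};

WordStats analyze(istream& in) {
    WordStats s;
    string word;
    size_t totalChars = 0;
    while (in >> word) {                 // >> skips spaces, tabs, and newlines
        s.words++;
        totalChars += word.length();
        if (word.length() > s.longest)
            s.longest = word.length();
        if (word[0] >= 'A' && word[0] <= 'Z')   // word is non-empty here
            s.capitalized++;
        char last = word[word.length() - 1];
        if (last == '.' || last == '!' || last == '?')  // simplistic sentence rule
            s.sentences++;
    }
    if (s.words > 0)
        s.avgLength = (double)totalChars / s.words;
    return s;
}
```

From main you could call `analyze(cin)` and print the fields. The "longest sentence" part would need one extra counter: count words since the last sentence-ending word, and keep the maximum.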